Open Access Publications
Permanent URI for this collection
Open access publications by faculty, postdocs, and graduate students in the Department of Computer and Information Sciences.
Browsing Open Access Publications by Issue Date
Item Multiple-Layer Visibility Propagation-Based Synthetic Aperture Imaging through Occlusion (MDPI AG, 2015-08-04) Yang, Tao; Li, Jing; Yu, Jingyi; Zhang, Yanning; Ma, Wenguang; Tong, Xiaomin; Yu, Rui; Ran, Lingyan. Heavy occlusions in cluttered scenes impose significant challenges on many computer vision applications. Recent light field imaging systems provide new see-through capabilities through synthetic aperture imaging (SAI) to overcome the occlusion problem. Existing synthetic aperture imaging methods, however, emulate focusing at a specific depth layer and are incapable of producing an all-in-focus see-through image. Alternative in-painting algorithms can generate visually plausible results but cannot guarantee their correctness. In this paper, we present a novel depth-free all-in-focus SAI technique based on light field visibility analysis. Specifically, we partition the scene into multiple visibility layers to directly deal with layer-wise occlusion and apply an optimization framework to propagate visibility information between layers. On each layer, visibility and optimal focus depth estimation is formulated as a multiple-label energy minimization problem. The layer-wise energy integrates the visibility masks from all previous layers, multi-view intensity consistency and a depth smoothness constraint. We compare our method with state-of-the-art solutions, and extensive experimental results demonstrate the effectiveness and superiority of our approach.

Item pGenN, a Gene Normalization Tool for Plant Genes and Proteins in Scientific Literature (PLOS (Public Library of Science), 2015-08-10) Ding, Ruoyao; Arighi, Cecilia N.; Lee, Jung-Youn; Wu, Cathy H.; Vijay-Shanker, K. BACKGROUND: Automatically detecting gene/protein names in the literature and connecting them to database records, also known as gene normalization, provides a means to structure the information buried in free-text literature. Gene normalization is critical for improving the coverage of annotation in the databases and is an essential component of many text mining systems and database curation pipelines. METHODS: In this manuscript, we describe a gene normalization system specifically tailored for plant species, called pGenN (pivot-based Gene Normalization). The system consists of three steps: dictionary-based gene mention detection, species assignment, and intra-species normalization. We have developed new heuristics to improve each of these phases. RESULTS: We evaluated the performance of pGenN on an in-house expertly annotated corpus consisting of 104 plant-relevant abstracts. Our system achieved an F-value of 88.9% (Precision 90.9% and Recall 87.2%) on this corpus, outperforming state-of-the-art systems presented in BioCreative III. We have processed over 440,000 plant-related Medline abstracts using pGenN. The gene normalization results are stored in a local database for direct query from the pGenN web interface (proteininformationresource.org/pgenn/). The annotated literature corpus is also publicly available through the PIR text mining portal (proteininformationresource.org/iprolink/).

Item miRTex: A Text Mining System for miRNA-Gene Relation Extraction (PLOS (Public Library of Science), 2015-09-25) Li, Gang; Ross, Karen E.; Arighi, Cecilia N.; Peng, Yifan; Wu, Cathy H.; Vijay-Shanker, K. MicroRNAs (miRNAs) regulate a wide range of cellular and developmental processes through gene expression suppression or mRNA degradation. Experimentally validated miRNA gene targets are often reported in the literature. In this paper, we describe miRTex, a text mining system that extracts miRNA-target relations, as well as miRNA-gene and gene-miRNA regulation relations. The system achieves good precision and recall when evaluated on a literature corpus of 150 abstracts, with F-scores close to 0.90 on the three different types of relations. We conducted full-scale text mining using miRTex to process all the Medline abstracts and all the full-length articles in the PubMed Central Open Access Subset. The results for all the Medline abstracts are stored in a database for interactive query and file download via the website at http://proteininformationresource.org/mirtex. Using miRTex, we identified genes potentially regulated by miRNAs in Triple Negative Breast Cancer, as well as miRNA-gene relations that, in conjunction with kinase-substrate relations, regulate the response to abiotic stress in Arabidopsis thaliana. These two use cases demonstrate the usefulness of miRTex text mining in the analysis of miRNA-regulated biological processes.

Item DiMeX: A Text Mining System for Mutation-Disease Association Extraction (Public Library of Science, 2016-04-13) Mahmood, A. S. M. Ashique; Wu, Tsung-Jung; Mazumder, Raja; Vijay-Shanker, K. The number of published articles describing associations between mutations and diseases is increasing at a fast pace. There is a pressing need to gather such mutation-disease associations into public knowledge bases, but manual curation slows down the growth of such databases.
We have addressed this problem by developing a text-mining system (DiMeX) to extract mutation-to-disease associations from publication abstracts. DiMeX consists of a series of natural language processing modules that preprocess input text and apply syntactic and semantic patterns to extract mutation-disease associations. DiMeX achieves high precision and recall, with F-scores of 0.88, 0.91 and 0.89 when evaluated on three different datasets for mutation-disease associations. DiMeX includes a separate component that extracts mutation mentions in text and associates them with genes. This component has also been evaluated on different datasets and shown to achieve state-of-the-art performance. The results indicate that our system outperforms the existing mutation-disease association tools, addressing the low-precision problems suffered by most approaches. DiMeX was applied on a large set of abstracts from Medline to extract mutation-disease associations, as well as other relevant information including patient/cohort size and population data. The results are stored in a database that can be queried and downloaded at http://biotm.cis.udel.edu/dimex/. We conclude that this high-throughput text-mining approach has the potential to significantly assist researchers and curators to enrich mutation databases.

Item Protein-protein interaction prediction based on multiple kernels and partial network with linear programming (BioMed Central, 2016-08-01) Huang, Lei; Liao, Li; Wu, Cathy H. BACKGROUND: Prediction of de novo protein-protein interactions is a critical step toward reconstructing PPI networks, which is a central task in systems biology. Recent computational approaches have shifted from making PPI predictions based on individual pairs and single data sources to leveraging complementary information from multiple heterogeneous data sources and partial network structure.
However, how to quickly learn weights for heterogeneous data sources remains a challenge. In this work, we developed a method to infer de novo PPIs by combining multiple data sources represented in kernel format and obtaining optimal weights based on random walk over the existing partial networks. RESULTS: Our proposed method utilizes the Barker algorithm and the training data to construct a transition matrix which constrains how a random walk would traverse the partial network. Multiple heterogeneous features for the proteins in the network are then combined into the form of weighted kernel fusion, which provides a new "adjacency matrix" for the whole network that may consist of disconnected components but is required to comply with the transition matrix on the training subnetwork. This requirement is met by adjusting the weights to minimize the element-wise difference between the transition matrix and the weighted kernels. The minimization problem is solved by linear programming. The weighted kernel fusion is then transformed to a regularized Laplacian (RL) kernel to infer missing or new edges in the PPI network, which can potentially connect the previously disconnected components. CONCLUSIONS: The results on synthetic data demonstrated the soundness and robustness of the proposed algorithms under various conditions. The results on real data show that the accuracies of PPI prediction for yeast and human data, measured as AUC, are increased by up to 19% and 11%, respectively, compared to a control method without using optimal weights.
Moreover, the weights learned by our method, Weight Optimization by Linear Programming (WOLP), are very consistent with those learned by sampling, and can provide insights into the relations between PPIs and the various feature kernels, thereby improving PPI prediction even for disconnected PPI networks.

Item Random sampling and model competition for guaranteed multiple consensus sets estimation (Sage Publications Inc., 2017-01-02) Li, Jing; Yang, Tao; Yu, Jingyi. Robust extraction of consensus sets from noisy data is a fundamental problem in robot vision. Existing multimodel estimation algorithms have shown success on large consensus set estimation. One remaining challenge is to extract small consensus sets in cluttered multimodel data sets. In this article, we present an effective multimodel extraction method to solve this challenge. Our technique is based on smallest consensus set random sampling, which we prove is guaranteed to extract all consensus sets larger than the smallest set from the input data. We then develop an efficient model competition scheme that iteratively removes redundant and incorrect model samplings. Extensive experiments on both synthetic and real data with a high percentage of outliers and multimodel intersections demonstrate the superiority of our method.

Item Effective biomedical document classification for identifying publications relevant to the mouse Gene Expression Database (GXD) (Oxford University Press, 2017-03-24) Jiang, Xiangying; Ringwald, Martin; Blake, Judith; Shatkay, Hagit. The Gene Expression Database (GXD) is a comprehensive online database within the Mouse Genome Informatics resource, aiming to provide available information about endogenous gene expression during mouse development.
The information stems primarily from many thousands of biomedical publications that database curators must read. Given the very large number of biomedical papers published each year, automatic document classification plays an important role in biomedical research. Specifically, an effective and efficient document classifier is needed to support the GXD annotation workflow. We present here an effective yet relatively simple classification scheme, which uses readily available tools while employing feature selection, aiming to assist curators in identifying publications relevant to GXD. We examine the performance of our method over a large manually curated dataset consisting of more than 25,000 PubMed abstracts, of which about half are curated as relevant to GXD and the other half as irrelevant. In addition to text from the title-and-abstract, we also consider image captions, an important information source that we integrate into our method. We apply a captions-based classifier to a subset of about 3,300 documents for which the full text of the curated articles is available. The results demonstrate that our proposed approach is robust and effectively addresses GXD document classification. Moreover, using information obtained from image captions clearly improves performance compared to title and abstract alone, affirming the utility of image captions as a substantial evidence source for automatically determining the relevance of biomedical publications to a specific subject area.

Item A comprehensive analysis of the integration of team research between sport psychology and management (Psychology of Sport and Exercise, 2020-06-13) Emich, Kyle J.; Norder, Kurt; Lu, Li; Sawhney, Aman. Both sports and organizations rely on teams. As such, the sport psychology and management literatures have contributed greatly to our understanding of team functioning.
Despite this, previous reviews based on subsets of articles in these literatures indicate a lack of communication between them. In this article, we assess the state of integration between the entirety of the sport psychology and management literatures on teams by considering the full set of interconnected team articles in the SCOPUS database (6,974 articles over 69 years). We use these data to conduct a combination of citation network analysis and content analysis via topic modeling to evaluate conceptual integration. The data show that interdisciplinary discussion between these two fields is lacking, particularly regarding the integration of sport psychology into management research. Whereas 7% of references to team articles in sport psychology come from management journals, only 0.6% of team references in management journals come from sport psychology. Despite this, longitudinal analysis indicates that in the last 10 years the rate of integration between these fields has been increasing. We identify specific topics that have accounted for this integration and suggest topics ripe for future integration.

Item emiRIT: a text-mining-based resource for microRNA information (Database, 2021-05-28) Roychowdhury, Debarati; Gupta, Samir; Qin, Xihan; Arighi, Cecilia N.; Vijay-Shanker, K. microRNAs (miRNAs) are essential gene regulators, and their dysregulation often leads to diseases. Easy access to miRNA information is crucial for interpreting generated experimental data, connecting facts across publications and developing new hypotheses built on previous knowledge. Here, we present extracting miRNA Information from Text (emiRIT), a text-mining-based resource, which presents miRNA information mined from the literature through a user-friendly interface. We collected 149,233 miRNA–PubMed ID pairs from Medline between January 1997 and May 2020.
emiRIT currently contains ‘miRNA–gene regulation’ (69,152 relations), ‘miRNA–disease (cancer)’ (12,300 relations), ‘miRNA–biological process and pathways’ (23,390 relations) and circulatory ‘miRNAs in extracellular locations’ (3,782 relations). Biological entities and their relation to miRNAs were extracted from Medline abstracts using publicly available and in-house developed text-mining tools, and the entities were normalized to facilitate querying and integration. We built a database and an interface to store and access the integrated data, respectively. We provide an up-to-date and user-friendly resource to facilitate access to comprehensive miRNA information from the literature on a large scale, enabling users to navigate through different roles of miRNA and examine them in a context specific to their information needs. To assess our resource’s information coverage, we have conducted two case studies focusing on the target and differential expression information of miRNAs in the context of cancer and a third case study to assess the usage of emiRIT in the curation of miRNA information.

Item Utilizing image and caption information for biomedical document classification (Bioinformatics, 2021-07-12) Li, Pengyuan; Jiang, Xiangying; Zhang, Gongbo; Trabucco, Juan Trelles; Raciti, Daniela; Smith, Cynthia; Ringwald, Martin; Marai, G. Elisabeta; Arighi, Cecilia; Shatkay, Hagit. Motivation: Biomedical research findings are typically disseminated through publications. To simplify access to domain-specific knowledge while supporting the research community, several biomedical databases devote significant effort to manual curation of the literature, a labor-intensive process. The first step toward biocuration requires identifying articles relevant to the specific area on which the database focuses.
Thus, automatically identifying publications relevant to a specific topic within a large volume of publications is an important task toward expediting the biocuration process and, in turn, biomedical research. Current methods focus on textual contents, typically extracted from the title-and-abstract. Notably, images and captions are often used in publications to convey pivotal evidence about processes, experiments and results. Results: We present a new document classification scheme, using both image and caption information, in addition to titles-and-abstracts. To use the image information, we introduce a new image representation, namely Figure-word, based on class labels of subfigures. We use word embeddings for representing captions and titles-and-abstracts. To utilize all three types of information, we introduce two information integration methods. The first combines Figure-words and textual features obtained from captions and titles-and-abstracts into a single larger vector for document representation; the second employs a meta-classification scheme. Our experiments and results demonstrate the usefulness of the newly proposed Figure-words for representing images. Moreover, the results showcase the value of Figure-words, captions and titles-and-abstracts in providing complementary information for document classification; these three sources of information, when combined, lead to an overall improved classification performance. Availability and implementation: Source code and the list of PMIDs of the publications in our datasets are available upon request.

Item COVID-19 Knowledge Graph from semantic integration of biomedical literature and databases (Bioinformatics, 2021-10-06) Chen, Chuming; Ross, Karen E.; Gavali, Sachin; Cowart, Julie E.; Wu, Cathy H. The global response to the COVID-19 pandemic has led to a rapid increase of scientific literature on this deadly disease.
Extracting knowledge from the biomedical literature and integrating it with relevant information from curated biological databases is essential to gain insight into COVID-19 etiology, diagnosis and treatment. We used the Semantic Web technology RDF to integrate COVID-19 knowledge mined from the literature by iTextMine, PubTator and SemRep with relevant biological databases, and formalized the knowledge in a standardized and computable COVID-19 Knowledge Graph (KG). We published the COVID-19 KG via a SPARQL endpoint to support federated queries on the Semantic Web and developed a knowledge portal with browsing and searching interfaces. We also developed a RESTful API to support programmatic access and provided RDF dumps for download.

Item A Bifactor Approximation Algorithm for Cloudlet Placement in Edge Computing (IEEE Transactions on Parallel and Distributed Systems, 2021-11-15) Bhatta, Dixit; Mashayekhy, Lena. Emerging applications with low-latency requirements such as real-time analytics, immersive media applications, and intelligent virtual assistants have rendered edge computing a critical computing infrastructure. Existing studies have explored the cloudlet placement problem in homogeneous scenarios with goals such as latency minimization, load balancing, energy efficiency, and placement cost minimization. However, placing cloudlets in a highly heterogeneous deployment scenario, considering next-generation 5G networks and IoT applications, is still an open challenge. The novel requirements of these applications indicate that there is still a gap in ensuring low-latency service guarantees when deploying cloudlets. Furthermore, deploying cloudlets cost-effectively and ensuring full coverage for all users in edge computing are other critical, conflicting issues.
In this article, we address these issues by designing a bifactor approximation algorithm that solves the heterogeneous cloudlet placement problem with guaranteed bounds on latency and placement cost, while fully mapping user applications to appropriate cloudlets. We first formulate the problem as a multi-objective integer programming model and show that it is NP-hard. We then propose a bifactor approximation algorithm, ACP, to tackle its intractability. We investigate the effectiveness of ACP through extensive theoretical analysis and experiments on multiple deployment scenarios based on New York City OpenData. We prove that ACP provides a (2,4)-approximation ratio for the latency and the placement cost. The experimental results show that ACP obtains near-optimal results in polynomial running time, making it suitable for both short-term and long-term cloudlet placement in heterogeneous deployment scenarios.

Item A crowdsourcing open platform for literature curation in UniProt (PLOS Biology, 2021-12-06) Wang, Yuqi; Wang, Qinghua; Huang, Hongzhan; Huang, Wei; Chen, Yongxing; McGarvey, Peter B.; Wu, Cathy H.; Arighi, Cecilia N. The UniProt knowledgebase is a public database for protein sequence and function, covering the tree of life and over 220 million protein entries. Now, the whole community can use a new crowdsourcing annotation system to help scale up UniProt curation and receive proper attribution for their biocuration work.

Item Unsupervised ECG Analysis: A Review (IEEE Reviews in Biomedical Engineering, 2022-02-28) Nezamabadi, Kasra; Sardaripour, Neda; Haghi, Benyamin; Forouzanfar, Mohamad. Electrocardiography is the gold standard technique for detecting abnormal heart conditions. Automatic analysis of the electrocardiogram (ECG) can help physicians in the interpretation of the large amount of data produced daily by cardiac monitors.
As the successful application of supervised machine learning algorithms relies on large amounts of labeled training data, there is a growing need for unsupervised algorithms for ECG analysis. Unsupervised learning aims to partition ECGs into distinct abnormality classes without cardiologist-supplied labels, a process referred to as ECG clustering. In addition to abnormality detection, ECG clustering can discover inter- and intra-individual patterns that carry valuable information about the whole body and mind, such as emotions and mental disorders. ECG clustering can also resolve specific challenges facing supervised learning systems, such as the imbalanced data problem, and can enhance biometric systems. While several reviews exist on supervised ECG analysis, a comprehensive review of unsupervised ECG analysis techniques is still lacking. This study reviews recent ECG clustering techniques with a focus on machine learning and deep learning algorithms. We critically review and compare these techniques, discuss their applications and limitations, and provide future research directions. This review provides further insights into ECG clustering and presents the information required to adopt the appropriate algorithm for a specific application.

Item Automated Identification of Uniqueness in JUnit Tests (ACM Transactions on Software Engineering and Methodology, 2022-05-24) Wu, Jianwei; Clause, James. In the context of testing, descriptive test names are desirable because they document the purpose of tests and facilitate comprehension tasks during maintenance. Unfortunately, prior work has shown that tests often do not have descriptive names. To address this limitation, techniques have been developed to automatically generate descriptive names. However, they often generate names that are invalid or do not meet with developer approval.
To help address these limitations, we present a novel approach to extract the attributes of a given test that make it unique among its siblings. Because such attributes often serve as the basis for descriptive names, identifying them is an important first step toward improving test name generation approaches. To evaluate the approach, we created a prototype implementation for JUnit tests and compared its output with human judgment. The results of the evaluation demonstrate that the attributes identified by the approach are consistent with human judgment and are likely to be useful for future name generation techniques.

Item SpecKriging: GNN-based Secure Cooperative Spectrum Sensing (IEEE Transactions on Wireless Communications, 2022-06-14) Zhang, Yan; Li, Ang; Li, Jiawei; Han, Dianqi; Li, Tao; Zhang, Rui; Zhang, Yanchao. Cooperative spectrum sensing (CSS), adopted by spectrum-sensing providers (SSPs), plays a key role in dynamic spectrum access and is essential for avoiding interference with licensed primary users (PUs). A typical SSP system consists of geographically distributed spectrum sensors, which can be compromised to submit fake spectrum-sensing reports. In this paper, we propose SpecKriging, a new spatial-interpolation technique based on Inductive Graph Neural Network Kriging (IGNNK) for secure CSS. In SpecKriging, we first pretrain a graph neural network (GNN) model with the historical sensing records of a few trusted anchor sensors. During system runtime, we use the trained model to evaluate the trustworthiness of non-anchor sensors’ data and also use those data, along with anchor sensors’ new data, to retrain the model. SpecKriging outputs trustworthy sensor reports for spectrum-occupancy detection. To the best of our knowledge, SpecKriging is the first work that explores GNNs for trustworthy CSS and also incorporates the hardware heterogeneity of spectrum sensors.
Extensive experiments confirm the high efficacy and efficiency of SpecKriging for trustworthy spectrum-occupancy detection even when malicious spectrum sensors constitute the majority.

Item Out-of-Domain Generalization From a Single Source: An Uncertainty Quantification Approach (IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022-06-20) Peng, Xi; Qiao, Fengchun; Zhao, Long. We are concerned with a worst-case scenario in model generalization, in the sense that a model aims to perform well on many unseen domains while there is only one single domain available for training. We propose Meta-Learning based Adversarial Domain Augmentation to solve this Out-of-Domain generalization problem. The key idea is to leverage adversarial training to create “fictitious” yet “challenging” populations, from which a model can learn to generalize with theoretical guarantees. To facilitate fast and desirable domain augmentation, we cast the model training in a meta-learning scheme and use a Wasserstein Auto-Encoder to relax the widely used worst-case constraint. We further improve our method by integrating uncertainty quantification for efficient domain generalization. Extensive experiments on multiple benchmark datasets indicate its superior performance in tackling single domain generalization.

Item A Hybrid Blockchain-Edge Architecture for Electronic Health Record Management with Attribute-based Cryptographic Mechanisms (IEEE Transactions on Network and Service Management, 2022-06-24) Guo, Hao; Li, Wanxin; Nejad, Mark; Shen, Chien-Chung. This paper presents a hybrid blockchain-edge architecture for managing Electronic Health Records (EHRs) with attribute-based cryptographic mechanisms. The architecture introduces a novel attribute-based signature aggregation (ABSA) scheme and multi-authority attribute-based encryption (MA-ABE) integrated with Paillier homomorphic encryption (HE) to protect patients’ anonymity and safeguard their EHRs.
All EHR activities and access-control events are recorded permanently as blockchain transactions. We develop the ABSA module on the Hyperledger Ursa cryptography library, the MA-ABE module on the OpenABE toolset, and the blockchain network on Hyperledger Fabric. We measure the execution time of ABSA’s signing and verification functions and of MA-ABE with different access policies and homomorphic encryption schemes, and compare the results with other existing blockchain-based EHR systems. We validate the access activities and authentication events recorded in blockchain transactions and evaluate the transaction throughput and latency using Hyperledger Caliper. The results show that the performance meets the requirements of real-world scenarios while safeguarding EHRs, and is robust against unauthorized retrievals.

Item A house divided: A multilevel bibliometric review of the job search literature 1973–2020 (Journal of Business Research, 2022-07-02) Norder, Kurt; Emich, Kyle; Kanar, Adam; Sawhney, Aman; Behrend, Tara S. A growing body of research across multiple disciplines has aimed to better understand the phenomenon of job search. However, little empirical research has examined the combined content and structure of the job search literature to accumulate programmatic knowledge. Unfortunately, this has resulted in redundancies and isolated advances that harm our ability to make concrete practical recommendations to aid policy makers, organizations, and broader society. Using bibliometric analysis of 3,197 articles on job search, the present article identifies and describes 10 distinct communities of thought and assesses patterns of integration between these communities. Assessment of community relationships confirms disciplinary divides, but reveals insights into patterns of thought within disciplines, and structural and conceptual relationships between them.
Based on these findings, we offer a multilevel conceptual framework to organize the job search literature and suggest possible ways to improve its integration to build a more programmatic understanding of the job search phenomenon.
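Several of the bibliometric items above report cross-field citation shares (e.g., 7% of sport psychology team references coming from management journals versus 0.6% in the other direction). As a minimal illustrative sketch (not the authors' code), such a share can be computed from a citation edge list of (citing_field, cited_field) pairs; the field labels and data below are made up for illustration:

```python
# Sketch: fraction of references made by one field that point to another,
# given a hypothetical edge list of (citing_field, cited_field) pairs.
from collections import Counter

def cross_field_share(citations, citing_field, cited_field):
    """Fraction of `citing_field`'s outgoing references that cite `cited_field`."""
    outgoing = [dst for src, dst in citations if src == citing_field]
    if not outgoing:
        return 0.0
    return Counter(outgoing)[cited_field] / len(outgoing)

# Toy example with made-up data: one of sport's three references cites management.
edges = [
    ("sport", "management"), ("sport", "sport"), ("sport", "sport"),
    ("management", "management"), ("management", "sport"),
]
print(cross_field_share(edges, "sport", "management"))
```

Applied to a full citation network such as the SCOPUS team-research corpus described above, the same per-field tallies yield the asymmetric integration percentages the reviews report.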