Phenotyping is an effective way to identify cohorts of patients with particular characteristics within a population. Learning Health Systems require high-quality, routinely collected electronic health record (EHR) data to drive analytics and research, and to translate the outputs of novel techniques such as machine learning into patient care and service improvement. At the core of this challenge is the ability to reliably identify clinically equivalent, research-grade patient cohorts. To achieve this, the data used for research must not only be of high quality, but the methods associated with its use must also be transparent and reproducible, so that any findings can be validated by the research community and generalised to other populations.

To enhance the portability of a phenotype definition across institutions, it is often defined abstractly, with implementers expected to realise the phenotype computationally before executing it against a dataset. However, unclear definitions, with little information about how best to implement them in practice, hinder this process. To address this issue, we propose a new multi-layer, workflow-based model for defining phenotypes, and a novel authoring architecture, Phenoflow, that supports the development of these structured definitions and their realisation as computable phenotypes.

To evaluate our model, we determine its impact on the portability of both code-based (COVID-19) and logic-based (diabetes) definitions, in the context of key datasets, including 26,406 patients at Northwestern University. Our approach is shown to ensure the portability of phenotype definitions and thus contributes to the transparency of resulting studies.
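To give a concrete flavour of the idea, the sketch below models a phenotype definition as an explicit workflow of named steps, each pairing a human-readable description (the abstract layer) with an executable check (the computational layer). This is purely illustrative: the class names, step names, diagnosis code, and HbA1c threshold are invented for the example and are not Phenoflow's actual model, API, or clinical guidance.

```python
# Illustrative sketch of a workflow-based phenotype definition.
# All names and thresholds here are hypothetical, not Phenoflow's API.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Step:
    name: str                                # short identifier for the step
    description: str                         # abstract layer: human-readable intent
    implementation: Callable[[Dict], bool]   # computational layer: executable check


@dataclass
class Phenotype:
    name: str
    steps: List[Step] = field(default_factory=list)

    def matches(self, patient: Dict) -> bool:
        # A patient belongs to the cohort only if every workflow step passes.
        return all(step.implementation(patient) for step in self.steps)


# A toy logic-based diabetes definition: a diagnosis code plus an
# HbA1c threshold (both values are illustrative only).
diabetes = Phenotype(
    name="diabetes-toy",
    steps=[
        Step(
            "diagnosis-code",
            "Record contains a diabetes diagnosis code",
            lambda p: "E11" in p.get("codes", []),
        ),
        Step(
            "hba1c-threshold",
            "Most recent HbA1c at or above 48 mmol/mol",
            lambda p: p.get("hba1c", 0) >= 48,
        ),
    ],
)

print(diabetes.matches({"codes": ["E11"], "hba1c": 52}))  # True
print(diabetes.matches({"codes": ["I10"], "hba1c": 52}))  # False
```

Separating the description from the implementation in this way is what lets a definition travel: another institution can read the abstract layer, then swap in an implementation suited to its own data model.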