What Is eHealth (6): Perspectives on the Evolution of eHealth Research

For the purposes of this paper, eHealth is defined as the use of emerging interactive technologies (eg, the Internet, interactive television, interactive voice response systems, kiosks, personal digital assistants, CD-ROMs, and DVD-ROMs) to enable health improvement and health care services. Though still at an early stage of development, the evidence base for these types of technology-based interventions is growing. Despite these potential benefits, there are barriers to the full implementation of eHealth solutions, and the limitations of access, health and technology literacy, and quality measures must be addressed [17,18].

While no single entity or sector originated the idea of harnessing electronic communication technology to address health care issues, purchasers (eg, employers and health plans) were among the sectors driving its early adoption. In the realm of health behavior change and disease management, there had been increasing calls to explore research methodologies for eHealth evaluation research, to examine how these technologies could be created and adapted to reach traditionally underserved populations, and to form and implement standards for the assessment of interventions [19].

In 2002, the Robert Wood Johnson Foundation created the Health e-Technologies Initiative, a national program office focused on expanding the body of knowledge about the efficacy, cost-effectiveness, and overall quality of eHealth applications for health behavior change and chronic disease management.

To establish a cohesive set of funding priorities, it was necessary for the Health e-Technologies Initiative to consider perspectives from a broad range of sectors, comparing areas of overlap and addressing controversies.

A series of interviews was conducted among opinion leaders (stakeholders) in eHealth in order to assess the existing strengths and challenges in eHealth evaluation research for health behavior change and chronic disease management.

From May to September of that year, 38 qualitative interviews were conducted. Each discussion involved two interviewers and between one and five participants. Participants were recruited by convenience sampling from designated sectors involved in the development, evaluation, dissemination, or use of eHealth technologies. Interviews were conducted in person whenever possible, but due to geographic limitations, one third were conducted by telephone. The unit of analysis for this study was the interview session, rather than the individual respondent.

A total of 9 interviews were conducted with developers and researchers; 7 with opinion leaders in information technology; 4 each with projects and programs that use interactive health communication applications (IHCs), with health plan representatives, and with technology and health care futurists; 3 with physician organizations and provider groups; 2 each with purchasers and larger employers, with consumer groups, and with data collectors; and 1 with a pharmaceutical company.

Participants consented to being audiotaped and received copies of their transcribed interviews to modify or edit as necessary. Interviews lasted approximately 50 minutes. Participants were informed that their individual responses would remain confidential but would be aggregated for future qualitative data analysis, and that quotes would not be attributed to individuals unless explicit written consent was obtained beforehand. If participants asked for a definition of eHealth, they were encouraged to offer their own, and their comments were not restricted solely to IHCs.

A spectrum of individual, community, and health care applications was discussed, varying by sector, but given the nature of the questions asked, the line of inquiry focused primarily on issues of quality in the development and evaluation of IHCs geared toward health behavior change or chronic disease management.

Transcripts were read line by line and coded for primary categories using NVivo qualitative analysis software (version 2). Frequent or related categories were grouped and identified as second- or third-level codes. As relationships between codes became evident, themes began to emerge. Table 1 provides an overview of the relative emphasis of each topic area by stakeholder category.
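Purely as an illustration of this kind of roll-up, and not drawn from the study itself, the sketch below shows in Python how first-level codes might be grouped under higher-level themes and tallied by stakeholder category, the kind of summary Table 1 reports; all codes, themes, and categories here are hypothetical.

```python
from collections import Counter, defaultdict

# Hypothetical first-level codes rolled up into higher-level themes;
# neither the codes nor the themes are taken from the study itself.
THEME_MAP = {
    "hits": "usage measures",
    "time_on_page": "usage measures",
    "differential_attrition": "process measures",
    "rct_cost": "evaluation barriers",
}

# Each coded segment pairs the interview's stakeholder category with the
# first-level code applied to a passage of transcript (invented examples).
coded_segments = [
    ("developers/researchers", "time_on_page"),
    ("developers/researchers", "differential_attrition"),
    ("health plans", "rct_cost"),
    ("IT opinion leaders", "hits"),
]

def emphasis_by_category(segments):
    """Tally how often each theme appears per stakeholder category."""
    table = defaultdict(Counter)
    for category, code in segments:
        table[category][THEME_MAP.get(code, "uncategorized")] += 1
    return table

for category, counts in sorted(emphasis_by_category(coded_segments).items()):
    print(category, dict(counts))
```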

There was universal frustration with the lack of comparability and standardization within the domain of eHealth. Stakeholders expressed a strong desire for a coordinated, rigorous effort to define and integrate the field.

Researchers, as well as purchasers, need criteria for identifying quality information, sharing and comparing findings, and building upon current evidence in order to move eHealth forward. The dearth of consensus and standardization in development and evaluation activities often appeared implicitly in stakeholder discussions of other topics and themes cited throughout this paper; many of the challenges identified by stakeholders pointed toward the larger incongruities surrounding the field of eHealth.

In order to standardize measures and ensure comparable results, an overarching paradigm must be well defined. Stakeholders were troubled by the broad, amorphous definitions of eHealth and behavior modification. At the time the interviews were conducted, professional organizations such as the Disease Management Association of America were beginning to issue guidelines and recommendations for determining the value of these interventions [20,21], and these efforts were highly valued by the researchers in this sample.

More recent publications have continued to address the varying meanings of the word eHealth [22-26]. The stakeholders explained the relative importance, from their perspective, of refining process and outcome measures, of determining the optimal study designs to capture these factors, and of the relevance of the eHealth research environment to interactive applications already being disseminated in health care and commercial industry.

These results align very closely with the issues raised in an editorial in this journal published shortly before the interviews were conducted; it is difficult to determine the degree to which that article, and the surrounding discussion in the literature, influenced the responses, particularly since all interviews occurred after its publication [27]. The stakeholders discussed the challenges associated with measuring usage, particularly traffic and utilization, using quantitative and qualitative methods.

Process measures provide insight into influences on utilization and can explain associations between differential attrition and outcome status [28]. Examples of process measures cited by the stakeholders included the delays users experience when trying to access the Internet, the time a user spends on a page, which components of a program are used more than others, and the validity of responses to online questionnaires.
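As a hedged sketch of how one such process measure might be computed, the following Python fragment approximates time on page from the gaps between a user's successive requests; the log format and data are hypothetical, and the approach inherits the well-known limitation that a gap between requests says nothing about what the user was actually doing.

```python
from datetime import datetime

# Hypothetical server log of (user_id, timestamp, page) events.
log = [
    ("u1", "2002-06-01 10:00:00", "/diary"),
    ("u1", "2002-06-01 10:04:30", "/tips"),
    ("u1", "2002-06-01 10:05:10", "/logout"),
]

def time_on_page(events):
    """Approximate seconds spent on each page from inter-request gaps."""
    parsed = [(u, datetime.strptime(t, "%Y-%m-%d %H:%M:%S"), p)
              for u, t, p in events]
    durations = {}
    for (u1, t1, p1), (u2, t2, _) in zip(parsed, parsed[1:]):
        if u1 == u2:  # only measure gaps within the same user's session
            durations[p1] = durations.get(p1, 0) + (t2 - t1).total_seconds()
    return durations

print(time_on_page(log))  # {'/diary': 270.0, '/tips': 40.0}
```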

There was a concern expressed among stakeholders that if the delivery mechanisms are not well understood and validated, the outcome results will be difficult to interpret.

Without process refinement, randomized controlled trial results may not be accurate, which could threaten the credibility, perceived effectiveness, and, ultimately, the uptake of these technologies. Only researchers and developers commented on process measures in any detail; they were mainly concerned that quality design was not emphasized by funders and purchasers.

Process measures help those designing interventions understand user interests and learning styles, which greatly affect program uptake and effectiveness. Users who are actively engaged in eHealth applications may benefit more than those who interact with the program in a superficial way.

Developers and researchers expressed an interest in the education literature, particularly its research on methods of learning, in guiding the creation of applications that are appealing and relevant to users. Collaborations between educational researchers and eHealth developers may facilitate the construction of well-designed, effective instructional programs that can adapt to individual styles of learning.

A major criticism of current data collection methods was that they do not distinguish among usage behaviors. For example, if tracking reveals that a Web page was viewed for an extended period of time, it does not tell evaluators how long the user was interacting with the page, or whether the user was even sitting at the computer. Commonly used measures, including hits, time on page, and number of log-ins, all have disadvantages, and at the time of the interviews, no measure or set of measures had emerged as an industry standard.
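One commonly discussed workaround, offered here as an illustration rather than a recommendation from the interviews, is an "active time" heuristic: count only the gaps between interaction events (clicks, keystrokes, scrolls) that fall under an idle cutoff. The event times and the 60-second cutoff below are arbitrary choices.

```python
# Gaps longer than this are treated as the user stepping away.
IDLE_CUTOFF = 60

def active_seconds(event_times, cutoff=IDLE_CUTOFF):
    """Sum inter-event gaps in seconds, ignoring gaps beyond the idle cutoff."""
    total = 0
    for prev, curr in zip(event_times, event_times[1:]):
        gap = curr - prev
        if gap <= cutoff:
            total += gap
    return total

# 10 minutes of raw time-on-page, but far less of it looks active.
events = [0, 20, 45, 70, 100, 550, 580, 600]
print(active_seconds(events))  # 150 -> versus the raw 600 seconds
```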

While there was a sense of dissatisfaction with process measures, they were viewed as fundamentally important to building an effective intervention, and stakeholders held that their role in development and evaluation should be regarded as highly as that of outcome measures. Ultimately, the credibility and value of eHealth lie in its ability to demonstrate positive outcome effects. It was universally understood that funders and purchasers expect proof that an intervention is effective, although there was uncertainty as to what level of rigor is sufficient.

It is difficult to determine quality outcome measures, especially when constrained by short follow-up periods. Long-term clinical outcomes require years of follow-up, and few such studies have been performed on eHealth applications; population-level measures of impact are similarly scarce. Evaluating the behavioral components addressed by IHCs was considered a major challenge: instruments that have been validated to measure behavior change have often not been validated for the evaluation of online interventions and were therefore considered too general.

Qualitative methods, self-reports, and Likert scales were named as helpful in obtaining certain types of information, but objective evidence of behavior change was preferred over self-reported measures or patient satisfaction ratings. The extent to which process and outcome are intertwined was a consistent theme among developers, researchers, and IT opinion leaders, and was recognized by the other stakeholders as well. Patient, health plan, and physician representatives were particularly conscious of the importance of user satisfaction, which may reflect these stakeholders' proximity to patients and patients' perceived quality of care from their doctors and health insurers.

Time and expense were the most consistently and emphatically cited challenges to rigorous evaluation. Researchers and developers were particularly frustrated with the separateness of funding streams for development and evaluation activities.

While accepting of the tension that often exists between what they want to discover and their obligation to fit within the parameters of a grant, researchers and developers find it more difficult to reconcile the choice they often face between allocating limited resources (time, money, personnel) to either development or evaluation. When required to choose, development is favored, on the rationale that it is pointless to evaluate poorly constructed interventions.

There are caveats to setting the minimum bar at the level of randomized controlled trials. If this design were considered the only acceptable methodology, stakeholders worried, the pace of research would be too slow to keep up with development.

Stakeholders were also concerned that questionable results could threaten the credibility of eHealth. Alternative, potentially more practical methods include usability studies and case-control designs, which align more easily with implementation timelines.

Stakeholders were unable to propose solutions to major sampling challenges associated with Internet research. As with the development of mail and telephone surveys in previous decades, online surveys and recruitment strategies need to be validated.

For example, they highlighted the need to prevent multiple responses from a single user through internal filtering mechanisms, particularly when incentives were offered to survey participants.
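A minimal sketch of such an internal filtering mechanism might look like the following; the choice of a normalized email address as the identifying field is a hypothetical simplification, and real systems would also weigh IP addresses, cookies, and response timing.

```python
import hashlib

# Digests of identifying fields from responses already accepted.
seen_digests = set()

def accept_response(email):
    """Return True if this respondent appears new, recording their digest."""
    digest = hashlib.sha256(email.strip().lower().encode()).hexdigest()
    if digest in seen_digests:
        return False  # likely a repeat submission chasing the incentive
    seen_digests.add(digest)
    return True

print(accept_response("Jane@example.org"))    # True
print(accept_response(" jane@example.org "))  # False: same person, normalized
```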

The increasing presence of the Internet in the daily lives of individuals [29] may make it increasingly difficult to recruit controls who do not have some baseline exposure to similar eHealth programs, and to prevent contamination.

Additionally, because access to information technology is stratified along socioeconomic lines, evaluation results of eHealth applications may be particularly prone to bias if the sample does not accurately represent the target population.

Those whom eHealth applications may benefit most must be represented in sample selection. Stakeholders contended that these individuals might be those who have little or no access to other sources of care.

If a sample is not representative of these users, but instead is made up of participants who, owing to higher socioeconomic status, have greater access to health care, eHealth tools, healthy lifestyle choices, and preventive care, researchers may have difficulty demonstrating the effects of eHealth applications.
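One standard statistical remedy, offered here only as an illustration rather than a recommendation from the stakeholders, is post-stratification weighting: respondents from underrepresented strata are up-weighted to match their share of the target population. The strata, shares, and outcomes below are invented.

```python
# Population vs sample composition (hypothetical income strata and shares).
population_share = {"low_income": 0.40, "high_income": 0.60}
sample_share     = {"low_income": 0.10, "high_income": 0.90}

# Weight each stratum by how underrepresented it is in the sample.
weights = {s: population_share[s] / sample_share[s] for s in population_share}

# Hypothetical per-stratum outcome: proportion showing behavior change.
outcome = {"low_income": 0.20, "high_income": 0.50}

naive    = sum(sample_share[s] * outcome[s] for s in outcome)
weighted = sum(sample_share[s] * weights[s] * outcome[s] for s in outcome)
print(f"naive estimate:    {naive:.2f}")     # 0.47, skewed toward high-SES users
print(f"weighted estimate: {weighted:.2f}")  # 0.38, closer to the target population
```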

Therefore, it is crucial that sampling methods continue to be refined and validated in order to accurately determine the efficacy of eHealth in the populations it has the potential to reach.

Creators of interventions felt intense pressure to develop products that are efficacious and usable from the beginning and palatable to the public and physicians.

However, stakeholders were aware that end users and some purchasers are not necessarily as concerned with evidence-based proof of effectiveness. All stakeholders were concerned about the dearth of quality control and regulatory entities in eHealth, and many recommended a rating system to distinguish legitimate online resources from those that are merely attractive or popular. Connecting patients with the right resources remains a substantial challenge.

It was unanimously held that, as a component of health care, these applications should be tested and ranked for quality in the same fashion as other treatment regimens. The controversy concerned identifying methodologies that are both necessary and realistic for reconciling the demands of good science and consumer interest.


References
Eysenbach G. What is e-health? J Med Internet Res 2001;3(2):e20.
What is e-health (2): the death of telemedicine?
What is eHealth (3): a systematic review of published definitions. J Med Internet Res 2005;7(1):e1.
What is eHealth (5): a research agenda for eHealth through stakeholder consultation and policy context review.
What is eHealth (6): perspectives on the evolution of eHealth research.


