How do you use Twitter in medical education? A new article outlines “how to” tips

This past week I posted my first tweet.


I feel like an old man writing “how I walked uphill both ways to school in 5ft of snow”. For many of you, I’m sure I sound like I may have just recently adopted electricity and the wheel…but I figured I should join the masses and test out this “new” technology.

I’m not sure what’s taken me so long to move to the Twitterverse, but part of me was still trying to figure out its utility. I guess I wasn’t so sure how I could use something like Twitter…especially since my entire impression of the technology was that it existed as a bulletin board for the latest celebrity breakups, hookups or feuds. I figured a few episodes of Entertainment Tonight should suffice as a Twitter replacement and I wouldn’t need this new technology…

However, I thought that there must be some way this can be effective within medicine…I’ve come across some physicians in Toronto who’ve started to use it. So before joining I followed along for a little bit to see how they used it. They often tweeted about new articles or cutting edge technologies…it seemed quite up to date and a great way to follow all that was new in medicine!

Then I came across a great article just published in Medical Teacher by a few medical educators in Calgary. They summarized 12 tips for using Twitter in medical education. For anyone who teaches or is involved in medical education, I highly recommend reading this paper. It provides practical reasons for using Twitter and nicely summarizes how it has been described in the medical literature!

I’m not trying to steal the authors’ thunder, but I do want to share a few of their tips…In the spirit of the wiki mindset which now pervades our consciousness, I’ve posted a few below. Enjoy!

I’ve picked the ones I thought were best and added a few comments or paraphrased the authors.

  1. Use a Twitter account for a specific class or group: be sure to set some ground rules so that learners will have a framework for the discussion
  2. Use a live Twitter chat in your next lecture: I’ve been to a few lectures recently where this was done and it’s really quite interesting. What’s especially cool is if people from outside the classroom tweet a comment! The beauty is that they can be anywhere else in the world. If you’re using it for questions, it might be best to only open it up near the end of the lecture or at least only post it on the projector during a dedicated time as it may serve as distraction rather than an effective tool.
  3. Tweet key resources or new literature for your students to use and read: This is an excellent way to flip the classroom. Have them follow along and get them the material before class so that they can read it, digest it then come to class or academic day and discuss & analyze it. Or simply provide a resource for them to access the latest articles that you’re reading.
  4. Use twitter for real-time feedback: If you can make it anonymous this could be pretty cool. It could be posted in real-time at the end of the lecture or course. Though the logistics of creating anonymous usernames may limit its utility…unless they’re ok with identifiable responses.
  5. Maximize the power of Twitter with an emphasis on efficient communication: Twitter’s benefits include having only 140 characters to post high-yield information. Use this to your advantage in teaching your students concise summaries for case presentations, etc…
  6. Twitter as a tool for self & group reflection:  I love this idea. I think it presents a novel way to gather feedback and one which many learners are comfortable using.
  7. Informal polls & quizzes: I think this is a good option though there may be a better app out there called Socrative which I’ve blogged about previously.
  8. Use it as a subject for further study: There’s little out there regarding this topic, and it could be an outstanding resident research project! I definitely agree with the authors that further study is needed. Most importantly, the authors specifically state that valuable studies would not compare Twitter to no intervention but rather evaluate how best to integrate this powerful technology.

I’m looking forward to seeing Twitter become increasingly used and studied within medical education! How will you use it?

Source: SE Forgie et al. Twelve tips for using Twitter as a learning tool in medical education. Medical Teacher 2012 [Epub ahead of print]

Recent JAMA editorial “Silencing the Science on Gun Research”

While I don’t usually use this blog for political viewpoints, I did want to share this remarkable editorial regarding the gun lobby and medical research.

If you have been at all following the unimaginable tragedy that occurred recently in Newtown, Conn then this article in JAMA (Journal of American Medical Association) is worth the read. It highlights how even the medical community has been suppressed by the powerful gun lobby. If only there was such a coordinated effort for more noble causes like an HIV vaccine, solving global poverty or world peace.

A recap on our first case-based learning session at Auckland Rescue Helicopter Trust

Auckland HEMS

This past week, we conducted the first ARHT case-based learning session for the duty crew!

While “case-based learning” may seem like a bunch of educational jargon…it can be rephrased as “sit around the table, discuss a previous job and consider the ‘what if’”.

We assembled the team for the day, which included the crewman, paramedic and doctor, for a 45-50 minute session in the board room. A huge thanks to Russell C, Leon, and Scott O., who all participated and generated a great discussion about several aspects of this case. (Next time we’ll be looking to get our pilot involved too!)

I had the opportunity to facilitate the session, which was based on a relatively straightforward job that I had selected. The job involved a patient with a head injury, and the focus was on the management of traumatic brain injury in the pre-hospital setting…

View original post 309 more words

Is that patient with LBBB having a STEMI? a new algorithm provides some guidance

Traditional teaching would suggest that any new or presumed new left bundle branch block (LBBB) on ECG in a patient with chest pain (or a chest pain equivalent) should be managed like an ST-elevation MI (STEMI).  There has been a considerable amount of published literature on the subject.  A well-known but somewhat controversial set of criteria (Sgarbossa) was established to aid the clinician in differentiating between the patients with LBBB who do and don’t require urgent ACS management (thrombolytics or emergent coronary catheterization). Studies have suggested that the Sgarbossa criteria are far from perfect, with a sensitivity of 78% and a specificity of 90%.
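As an aside, sensitivity and specificity are usually translated into likelihood ratios before they’re applied at the bedside. Here’s a minimal, purely illustrative sketch (the helper function is my own, not from the paper) using the 78%/90% figures reported above:

```python
def likelihood_ratios(sensitivity, specificity):
    """Compute positive and negative likelihood ratios
    from a test's sensitivity and specificity."""
    lr_positive = sensitivity / (1 - specificity)   # how much a positive result raises the odds
    lr_negative = (1 - sensitivity) / specificity   # how much a negative result lowers the odds

    return lr_positive, lr_negative


# Reported Sgarbossa performance: sensitivity 78%, specificity 90%
lr_pos, lr_neg = likelihood_ratios(0.78, 0.90)
print(lr_pos, lr_neg)  # roughly 7.8 and 0.24
```

A positive-criteria LR of about 8 is useful but not definitive, and a negative LR of about 0.24 doesn’t come close to ruling out MI, which is exactly why these criteria are described as far from perfect.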

In the pre-hospital setting, protocols have been implemented suggesting that a new/presumed new LBBB in suspected ACS should be treated like a STEMI. But recent data do not support this universal approach, as many of these patients with suspected ACS and LBBB are not having an acute MI. So treating these patients as if they’re having a STEMI could put them at unnecessary risk from the complications of either thrombolysis or coronary catheterization. In pre-hospital settings where thrombolytics are actually administered, appropriate diagnosis is essential. In our setting in Auckland we’re not administering pre-hospital thrombolytics, but this discussion remains important, especially since we notify our receiving hospital that we have a patient with suspected ACS (possibly STEMI). Furthermore, if we suspect a STEMI, we will re-route to a centre capable of performing coronary catheterization. Such decisions may occur while also considering weather factors and time to hospital.

A recent publication has proposed a very reasonable algorithm to aid the clinician in managing the patient with suspected ACS and a LBBB. Check it out for yourself.

published by Neeland IJ et al. JACC 2012 Jul 10;60(2):96-105


Maybe this is what’s being done already…but it’s nice to finally see something in the cardiology literature that provides a bit more guidance in managing these challenging patients. This is something that all ED physicians can keep on their smartphones and at least use when they speak with the cardiologist on-call. Interestingly, the authors then go on to suggest demoting the class I level of evidence that new LBBB be treated as a STEMI equivalent.

Basically, this would remove suspected ACS + new LBBB as an automatic STEMI equivalent thus having considerable impact on the pre-hospital and in-hospital management of these patients.


Flying shotgun on the way home from Hamilton, NZ

I don’t usually get to fly shotgun because of our seating configurations in the helicopter. Usually the crewman, paramedic and I are in the back on the way to the patient, while the cameraman sits up front. (Yes, we have a cameraman on board, since our rescue service is the subject of a reality TV show that is filmed & broadcast in NZ.) I wouldn’t say it’s quite the same as Jersey Shore, but it’s an interesting depiction of what we do. And when we transport the patient, both the paramedic and I remain in the back, obviously focused on the patient, so rarely do we get to enjoy the view.

But once we bring the patient to hospital then sometimes if I’m lucky I can get an upgrade to the front seat with the pilot…free of charge.

Here’s a few pics of our recent flight from Hamilton to Auckland.


Just a few miles from where they filmed Lord of the Rings. This is actually “Middle Earth”! If you look closely, you can see two Hobbits having a pint. 


Control panel in a BK117. Very good thing I’m not in charge in flying this thing!

The Auckland coastline




I was looking down with envy at one of Auckland’s golf courses…


If you look closely you can see a few guys walking with their golf bags. They just landed in their private chopper at the helipad…my dream would be to fly by helicopter to an amazing golf course. These guys did NOT look like they had a tough day at the office!

Sleep + Medical Residents = no change in patient outcomes?

As concern mounts about resident sleep schedules and their potential impact on patient care, errors and all other potentially bad things, studies have begun looking at the impact of sleep on residents. The challenge has been to correlate this with patient-oriented outcomes, and this study suffers the same problem: it is inadequately powered to assess whether providing more sleep to residents would benefit patients.


This study, just published in JAMA, took about 100 internal medicine residents and randomized them to either a standard resident schedule, which includes 30hr shifts, or a protected sleep period (1230-0530). They looked at a whole list of outcomes, the primary one being “sleep time”. They also looked at alertness, number of sleepless nights, subjective “sleepiness” and some patient outcomes like readmission rates, ICU transfers and length of stay.

Now for the results

In a shocking twist of events…residents randomized to mandated sleep periods actually got more sleep! (vs traditionally scheduled residents). I’m not sure why this is a surprise…they were forced to give their work cell phones to someone who would cover for them…I figure this is akin to testing whether an apple is more likely to fall to the ground where gravity exists vs. in a zero-gravity zone. What was actually interesting, however, was that residents with a “protected sleep period” of 5hrs still only got about 3 hours of sleep, compared to 2hrs in the other group. Still not great…I can only imagine the size of a study required to show patient-oriented outcomes when there’s only 1hr of sleep difference between groups.

In an outcome that was touted highly by the authors, residents in the “mandated sleep” group were more alert the next morning. This was tested using a few fancy tests, including sleepiness scales and psychomotor vigilance (whatever that means!??!)

In conclusion…this study showed feasibility, increased amount of sleep, and more alert residents who got more sleep.

What we need to know however is how this impacts resident learning…did they miss any critical learning opportunities? What if these were surgical residents who would have missed a once-in-a-lifetime surgery? How much does this program cost? And really, we need to find out if patients are served better in such cases…that is the holy grail of this area of study. Another study which doesn’t quite provide us the answers we need.

Ironically, I’m writing this at 1am…Anyways, here’s the abstract:

Effect of a protected sleep period on hours slept during extended overnight in-hospital duty hours among medical interns

Context  A 2009 Institute of Medicine report recommended protected sleep periods for medicine trainees on extended overnight shifts, a position reinforced by new Accreditation Council for Graduate Medical Education requirements.

Objective  To evaluate the feasibility and consequences of protected sleep periods during extended duty.

Design, Setting, and Participants  Randomized controlled trial conducted at the Philadelphia VA Medical Center medical service and Oncology Unit of the Hospital of the University of Pennsylvania (2009-2010). Of the 106 interns and senior medical students who consented, 3 were not scheduled on any study rotations. Among the others, 44 worked at the VA center, 16 at the university hospital, and 43 at both.

Intervention  Twelve 4-week blocks were randomly assigned to either a standard intern schedule (extended duty overnight shifts of up to 30 hours; equivalent to 1200 overnight intern shifts at each site), or a protected sleep period (protected time from 12:30 AM to 5:30 AM with handover of work cell phone; equivalent to 1200 overnight intern shifts at each site). Participants were asked to wear wrist actigraphs and complete sleep diaries.

Main Outcome Measures  Primary outcome was hours slept during the protected period on extended duty overnight shifts. Secondary outcome measures included hours slept during a 24-hour period (noon to noon) by day of call cycle and Karolinska sleepiness scale.

Results  For 98.3% of on-call nights, cell phones were signed out as designed. At the VA center, participants with protected sleep had a mean 2.86 hours (95% CI, 2.57-3.10 hours) of sleep vs 1.98 hours (95% CI, 1.68-2.28 hours) among those who did not have protected hours of sleep (P < .001). At the university hospital, participants with protected sleep had a mean 3.04 hours (95% CI, 2.77-3.45 hours) of sleep vs 2.04 hours (95% CI, 1.79-2.24) among those who did not have protected sleep (P < .001). Participants with protected sleep were significantly less likely to have call nights with no sleep: 5.8% (95% CI, 3.0%-8.5%) vs 18.6% (95% CI, 13.9%-23.2%) at the VA center (P < .001) and 5.9% (95% CI, 3.1%-8.7%) vs 14.2% (95% CI, 9.9%-18.4%) at the university hospital (P = .001). Participants felt less sleepy after on-call nights in the intervention group, with Karolinska sleepiness scale scores of 6.65 (95% CI, 6.35-6.97) vs 7.10 (95% CI, 6.85-7.33; P = .01) at the VA center and 5.91 (95% CI, 5.64-6.16) vs 6.79 (95% CI, 6.57-7.04; P < .001) at the university hospital.

Conclusions  For internal medicine services at 2 hospitals, implementation of a protected sleep period while on call resulted in an increase in overnight sleep duration and improved alertness the next morning.

Wanna get me some smarter…medical knowledge among residents


A recent article was just published on “assessing medical knowledge of emergency medicine residents”. This group systematically reviewed a list of educational tools used to assess medical knowledge among EM residents; for example, multiple choice questions, in-training exams, direct observation, the USMLE and OSCEs.  The authors looked at each method and described the existing literature about its effectiveness. While they did a nice review of what’s out there, their conclusions were disappointing. A call for further research…blah blah blah…but little call for true change. Anyways, I’ve put a bit of my own editorial below, based on the article.

Apparently, in non-EM specialties, passing your board exams has been shown to translate to improved patient outcomes. WOW! I mean, if that isn’t evidence that we should continue board exams in their current format, I don’t know what is (please read sarcastically). There doesn’t appear to be data to suggest that better scores equal better patient outcomes, but even if there were, it could easily be subject to confounders. One would expect that a proportion of conscientious, smart physicians who spend a good deal of time learning to pass an exam will also apply that same conscientious attitude to their patients (even if they don’t necessarily apply the knowledge!).

What is fascinating is the next sentence in the paper…”we were unable to locate any data on the effect ABEM certification has on patient-centered outcomes“. In an era where we’ve begun seeking patient-centered outcomes, we have NO data about whether our ONLY means of accrediting staff physicians has any impact on patients in emergency medicine. Impressive…to say the least!

I remain puzzled as to why we continue to use the board exam in its current format (slightly different between the US & Canada, but a similar idea), where we require residents to recall ridiculous amounts of irrelevant material and long lists that will never be used clinically. We cling to this assessment format as if there were evidence to support it! And we reject other means of assessment because the evidence is lacking? There’s increasing evidence that we can replicate the stress of real situations in a simulation setting – wouldn’t this be a great place to evaluate emergency medicine trainees?

“Cognitive psychology has demonstrated that facts and concepts are best recalled and put into service when they are taught, practiced, and assessed in the context in which they will be used” (Cooke M et al NEJM 2006;355:1339-44)

Or what about more interactive, case-based formats? We need to get away from making residents regurgitate what is in a textbook. I can find the answer to a 10-item list in 10 seconds with a functional internet connection and a keyboard. I agree that emergent situations do require memory and recall, but a substantial portion of what we do would not fall under this category. Here’s a nice quote from an article written more than 10 years ago…yet little has changed!

“With knowledge so easily accessible, physicians in training as well as practicing physicians can depend less upon their own memories and more upon external memory devices” (Irby & Wilkerson J Gen Intern Med 2003;18:370-376)

Now I should probably be careful regarding my opinions…given I haven’t yet written my board exams! However, as we become increasingly surrounded by technology and immediately accessible “knowledge”, it’s time to evaluate medical trainees in a manner that will reflect their practice.

What’s the value of a “gut feeling” in medicine? Maybe a lot!

In acute care medicine, we spend considerable time and energy studying diagnostic tests that will aid us in finding that potentially dangerous diagnosis. If we look at the recent literature on patients with acute chest pain who present to the emergency department, millions of dollars have been spent on improved diagnostic tests that will improve our ability to determine who had a heart attack (or is at least at risk) and who didn’t.

In medicine, we love facts, we love evidence. We seek abnormal results that will help us explain the patient’s symptoms. Using a complex (but not well understood) process, we integrate the patient’s history, the physical examination and then selected tests to help determine whether or not the patient has a serious illness. In general, this process is the formulation of a clinical impression.

More recently, clinicians are being taught about the potential biases that can creep into our clinical judgement and how to prevent or deal with them. Maybe you are more likely to attribute a diagnosis (incorrectly) to a patient if you saw a similar presentation last week. Or maybe fatigue will lead you down the wrong path. Or perhaps you’re more likely to attribute a benign diagnosis to a patient because it’s more common, and neglect some key aspect of the presentation (e.g. burning chest pain MUST be heartburn…).

In general, we as physicians will formulate a clinical impression based on the evidence presented to us. We then develop a pre-test probability and make decisions for appropriate testing afterwards.

Well there’s a fascinating new study in the BMJ that looks at the role of “gut feeling” of physicians in the diagnosis of serious pediatric infections. Interestingly, the authors differentiated “gut feeling” from clinical impression:

  • clinical impression was defined as “a subjective observation that the illness was serious on the basis of the history, observation and clinical examination”
  • in contrast, a gut feeling was defined as “an intuitive feeling that something was wrong even if the clinician was unsure why”

The authors (from Belgium) studied more than 3000 pediatric patients (0-16yrs) who presented to their primary care provider. These physicians were asked to provide an overall “clinical impression” as well as their “gut feeling” about whether a serious infection was present.  Among the patients who were assessed clinically as not having a serious infection, a gut feeling that something might be wrong anyway was associated with a significantly increased risk of serious illness (likelihood ratio of 25). For the non-statistically inclined, anything above 10 is very predictive!  And when the gut feeling was that there was no serious illness, the probability decreased from 0.2% to 0.1%. This study is fascinating and highlights some potential for educators in curriculum design within medical education. We should not discount our “gut feelings”, and trainees should likely be educated in how to manage such feelings. The authors summarize the implications of the study for future training quite nicely:

“Although students and trainees are taught to look at children’s overall appearance and breathing, there seems to be a potential gap between the routine clinical assessment of these features and the more holistic response, producing a “something is wrong” gut feeling. Perhaps we should also be more explicit in encouraging sensitivity to parental concern, stressing that it does make the presence of serious illness more likely even when clinical examination is reassuring. We should certainly make clear when teaching that an inexplicable (or not fully explicable) gut feeling is an important diagnostic sign and a good reason for seeking the opinion of someone with more expertise or scheduling a review of the child.” 

Van den Bruel et al. Clinicians’ gut feeling about serious infections in children: observational study. BMJ 2012; 345:e6144.

Objective To investigate the basis and added value of clinicians’ “gut feeling” that infections in children are more serious than suggested by clinical assessment.

Design Observational study.

Setting Primary care setting, Flanders, Belgium.

Participants Consecutive series of 3890 children and young people aged 0-16 years presenting in primary care.

Main outcome measures Presenting features, clinical assessment, doctors’ intuitive response at first contact with children in primary care, and any subsequent diagnosis of serious infection determined from hospital records.

Results Of the 3369 children and young people assessed clinically as having a non-severe illness, six (0.2%) were subsequently admitted to hospital with a serious infection. Intuition that something was wrong despite the clinical assessment of non-severe illness substantially increased the risk of serious illness (likelihood ratio 25.5, 95% confidence interval 7.9 to 82.0) and acting on this gut feeling had the potential to prevent two of the six cases being missed (33%, 95% confidence interval 4.0% to 100%) at a cost of 44 false alarms (1.3%, 95% confidence interval 0.95% to 1.75%). The clinical features most strongly associated with gut feeling were the children’s overall response (drowsiness, no laughing), abnormal breathing, weight loss, and convulsions. The strongest contextual factor was the parents’ concern that the illness was different from their previous experience (odds ratio 36.3, 95% confidence interval 12.3 to 107).

Conclusions A gut feeling about the seriousness of illness in children is an instinctive response by clinicians to the concerns of the parents and the appearance of the children. It should trigger action such as seeking a second opinion or further investigations. The observed association between intuition and clinical markers of serious infection means that by reflecting on the genesis of their gut feeling, clinicians should be able to hone their clinical skills.
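To put the abstract’s headline numbers together: a likelihood ratio converts a pre-test probability into a post-test probability via odds. Here’s a small, purely illustrative sketch (the function is my own, not from the paper) using the study’s figures of 6/3369 ≈ 0.2% pre-test probability and a positive gut-feeling likelihood ratio of 25.5:

```python
def post_test_probability(pre_test_p, likelihood_ratio):
    """Apply a likelihood ratio to a pre-test probability
    using the odds form of Bayes' theorem."""
    pre_test_odds = pre_test_p / (1 - pre_test_p)
    post_test_odds = pre_test_odds * likelihood_ratio
    return post_test_odds / (1 + post_test_odds)


# Study figures: 6 of 3369 children judged clinically "non-severe"
# actually had a serious infection; LR for a positive gut feeling = 25.5
pre_test_p = 6 / 3369
post_test_p = post_test_probability(pre_test_p, 25.5)
print(f"{post_test_p:.1%}")
```

So a positive gut feeling moves the probability of serious infection from roughly 0.2% to somewhere around 4%, which is exactly why the authors argue it should trigger action like a second opinion or further investigation.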