The Evolution of Microservices

August 11th, 2022

Evolution is a constant feature of the Information Technology (IT) industry. The environment is in constant flux; the landscape is always changing. New opportunities arise for nimble, better-adapted systems to fill gaps in the new ecosystem, while existing inhabitants get squeezed, possibly even out of existence. When confronted with such a changing environment, you evolve or you get replaced. And when the landscape changes as quickly as it does in IT, you must be constantly wary of becoming obsolete. That’s just one of the facts of life that makes IT so exciting.

This evolution is a factor even at the micro level. Experienced programmers have a concept known as Code Smell. They recognise, just by looking at a section of code and seeing how it relates to the whole, that something smells fishy. They notice, perhaps, that a fundamental design principle is being violated somewhere, or they find anachronistic code relating to some vestigial, broken functionality. Such code is said to smell, and it will trouble any developer working in that section of code because they will not be able to trust their instincts. Things that should work will break, and to get things to work, a principle might need to be bent a little. It’s a sure sign that refactoring is required. Left too long, or not quarantined appropriately, the smell will spread like a contagion and eventually lead to some catastrophic failure. In the face of such pressures, companies lose their best developers, who move on to more cutting-edge technologies. Then things spiral down quickly, and the smell of decay pervades. Yes, sometimes systems develop a bad smell simply because the world has moved on.

A well-built system resembles a well-pruned tree: perfect in balance and form. But it immediately comes under pressure from new demands, and can lose structure as it transforms organically to meet them. Perhaps there’s a requirement to move to a new platform, provide mobile access, or add some new technology like voice recognition. Soon all that balance is lost, and it becomes difficult to navigate through the dense growth. It, too, soon begins to smell.

When the smell gets bad enough, it’s an opportune time to re-evaluate fundamental designs and consider refactoring. And when you refactor, it’s important to examine the current landscape and be cognisant of emerging technologies and architectures.

Some pressures are so profound that drastic measures are required to maintain structure. Some changes are universal in nature, and all systems must shift to accommodate them. One such change was the move to the cloud, and how existing systems dealt with this demand would have profound effects on their future prospects.

The pressure to move to the cloud was in great part driven by procurement difficulties. Reacting to anticipated demand used to take months, as forecasts were examined and equipment procured, installed and tested, only perhaps to be left idle when demand failed to materialise, or for services to degrade when demand was unexpectedly high. The cloud offered a seemingly inexhaustible supply of virtual hardware. Procurement took only moments, and when demand was low, anything surplus to requirements could be relinquished. Companies like Amazon and Google were becoming leaders in the latest methodologies for high availability. If your data centre infrastructure did not form part of your core business value, there was no sense in trying to compete, so many chose to migrate to the public cloud.

In many cases, existing systems were simply fork-lifted onto the virtual platforms with minimal modification. Yes, they were on the cloud, but they were never designed to leverage cloud architecture to its potential, and they would soon become more unstable as they tried to respond to new demands. The landscape had changed drastically. The ubiquitous nature of the cloud meant that new opportunities, like mobile access, became desirable. Legacy systems struggled to respond. Some, particularly those designed around a Service Oriented Architecture (SOA), fared a little better.

SOA, seen at a very high level, is built around a paradigm of identifying the individual services of a system that fulfil a business need, and then building those services with well-defined interfaces so that they can communicate with each other to perform some business function. The paradigm prioritised, among other things, interoperability, flexibility and evolutionary refinement, all of which enabled it to adapt easily to the cloud. The level of granularity is generally just enough to perform some business function, though finer granularity is adopted where services share functionality. Communication was typically via an Enterprise Service Bus (ESB), and together these services worked in unison to provide a complete system. With the system architected in this way, individual services could adapt more agilely to the new infrastructure, and so SOA quickly became the architecture of choice for Cloud Applications.
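
To make the bus idea concrete, here is a toy sketch. It is illustrative only: an in-memory stand-in rather than any real ESB product, and the topic name, services and message format are invented.

```python
# Illustrative only: a toy in-memory "service bus" showing the SOA idea of
# services with well-defined interfaces communicating over a shared channel.
# Real ESBs add routing, message transformation, persistence and much more.
from collections import defaultdict

class ToyServiceBus:
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a service's handler for a named message topic."""
        self.handlers[topic].append(handler)

    def publish(self, topic, message):
        """Deliver a message to every service subscribed to the topic."""
        for handler in self.handlers[topic]:
            handler(message)

bus = ToyServiceBus()

# Two "services", each exposing one business function via the bus.
bus.subscribe("order.placed", lambda msg: print(f"Billing: invoicing order {msg['id']}"))
bus.subscribe("order.placed", lambda msg: print(f"Shipping: dispatching order {msg['id']}"))

bus.publish("order.placed", {"id": 42})
```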

To understand what a profound change moving to the cloud was for companies that were previously managing their own data centres, imagine provisioning a hospital for the next 50 years. Initially, wards would be empty, but towards the end of its lifespan the hospital would be bursting at the seams. The sweet spot, where demand is perfectly matched, is short-lived. The hospital would spend most of its lifetime either under-utilised and expensive, or swamped and under-performing. Now imagine you could provision a smaller hospital instead, just for the short term, one that you could bulk up as required. Couple that with the fact that technology is constantly advancing and providing ever more options. Now, when your emergency department is overrun and you’ve bulked up as much as you can, you can spawn a completely new hospital next door in a matter of hours to cope with demand, with load distributed between them. But as you marvel at how awesome that is, you can’t help but wonder if you really needed that second helipad, with its own helicopter and crew. And you begin to notice a strange smell and wonder if there’s a better way.

For a while there wasn’t a better way, but that would soon change as cloud technologies continued to mature. DevOps disciplines advanced with improved tools and simplified infrastructure. Alternative data persistence technologies came on stream. New lightweight messaging and lightweight runtime technologies became available, along with auto-scaling. The landscape had changed so much that it was time to look again at the wasteful practice of cloning complete systems to respond to localised demands, and to re-examine whether SOA was still the architecture of choice for Cloud Applications.

Typical Monolithic Web Application

Breaking things down into manageable pieces, decoupling them, and making them independent has been the hallmark of good object-oriented design since its inception. These new technologies meant it was no longer necessary to bundle all of the services together into a single monolithic application for them to function well together. Instead, each component could be developed independently. Components could leverage the latest and most appropriate data persistence schemes for their task. By utilising new lightweight runtimes, they could deploy in under a second for immediate response to increased demand, communicating efficiently using lightweight messaging protocols. Their independently bounded contexts meant that language, runtime and datastore could be chosen specifically for the task at hand. Services, now smaller and self-contained, could be built with faster iteration cycles, and better DevOps tools meant that the more complicated deployment could be automated. The age of microservices had begun, with the promise of greater agility, scalability and resilience.

Typical Microservice Configuration
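
As a rough sketch of what such a small, self-contained service might look like, here is a minimal, illustrative “inventory” microservice, assuming Flask; the endpoints and in-memory store are invented for illustration.

```python
# A minimal, illustrative "inventory" microservice: one bounded context,
# its own datastore (an in-memory dict standing in for a real database),
# and a small, well-defined HTTP interface. Assumes Flask is installed.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Each microservice owns its data; no other service touches this store directly.
stock = {"widget": 10, "gadget": 4}

@app.route("/stock/<item>", methods=["GET"])
def get_stock(item):
    if item not in stock:
        return jsonify(error="unknown item"), 404
    return jsonify(item=item, quantity=stock[item])

@app.route("/stock/<item>/reserve", methods=["POST"])
def reserve(item):
    qty = int(request.args.get("qty", 1))
    if stock.get(item, 0) < qty:
        return jsonify(error="insufficient stock"), 409
    stock[item] -= qty
    return jsonify(item=item, reserved=qty, remaining=stock[item])

if __name__ == "__main__":
    # A lightweight runtime: the whole service starts in well under a second.
    app.run(port=5001)
```

Because the service owns its datastore and exposes only a narrow HTTP interface, it can be scaled, replaced or redeployed without touching any other part of the system.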

There are some who consider these new microservices to be just the current manifestation of SOA. After all, SOA is designed to evolve, and the same basic principles are being adhered to; but people said similar things about SOA when it first emerged onto the scene many years ago. Basic principles have always applied. Systems based on SOA principles have matured to a point where they are readily recognisable as services communicating with each other via an ESB. The shape of a system designed around microservice architecture is much more decentralised. SOA primarily focuses on enterprise scale, whereas microservice architecture operates at application scale. There is a continuum, as systems evolve to leverage new technologies and adapt to new environments, and it can be hard to pinpoint an exact location where microservices began, but that’s the same for all evolving things, whether it’s birds and dinosaurs or humans and apes.

Microservices are not the same as the services that were exposed via APIs within SOA. Rather, they are independent, standalone components that perform a function and can be deployed independently. Well-designed microservice architectures demand that each service manage its own data. This gives a microservices-based system a very different shape. Splitting a system into completely independent components is extremely difficult, and distributing systems in this way comes with its own set of problems. For example, many business transactions will affect more than a single service (with many disparate databases being modified), so maintaining consistency becomes more difficult. Imagine, for example, an order failing at the last hurdle because a credit card was refused, even after the stock had been removed from the virtual shelf. Everything would need to be undone, and the stock placed back on the shelf. As microservice architecture has matured, standard patterns have emerged for dealing with such difficulties (Richardson, 2018); distributed systems have existed for a long time, and the difficulties are well understood.
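
One of those standard patterns is the saga (Richardson, 2018): each step of a distributed transaction is paired with a compensating action, and a failure part-way through runs the compensations in reverse order. Here is a minimal sketch of the order example above, with invented service stubs.

```python
# An illustrative orchestrated saga for the order example: each step has a
# compensating action, and a failure part-way through triggers the
# compensations in reverse order (see Richardson, 2018, for the pattern).
class PaymentDeclined(Exception):
    pass

def reserve_stock(order):
    print(f"Stock service: reserved {order['qty']} x {order['item']}")

def release_stock(order):
    print(f"Stock service: released {order['qty']} x {order['item']} (compensation)")

def charge_card(order):
    raise PaymentDeclined("card refused")   # simulate the failing last hurdle

def refund_card(order):
    print("Payment service: refund issued (compensation)")

# (step, compensation) pairs, in execution order
SAGA = [(reserve_stock, release_stock), (charge_card, refund_card)]

def run_saga(order):
    done = []
    try:
        for step, compensation in SAGA:
            step(order)
            done.append(compensation)   # only compensate steps that succeeded
    except Exception as err:
        print(f"Saga failed: {err}; compensating...")
        for compensation in reversed(done):
            compensation(order)

run_saga({"item": "widget", "qty": 2})
```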

Luckily, microservice architecture’s flexibility means it can coexist quite happily within other architectures, so migration can be a gradual process. As existing legacy systems become stale and unmanageable, they can be retired and, where appropriate, replaced by microservices.

Of course, there are still legacy monolithic systems out there in the wild, some still in regular use, like a well-engineered Victorian railway bridge. And where they exist, they hamper the development of modern, faster, leaner systems. They are a product of their time. Once marvelled at, now they just annoy commuters who wonder why the high-speed train must slow to a crawl in their vicinity. A digital Colosseum: built for chariots, while all around people drive Ferraris.

Owners of these systems might protest that they are under constant development. Well, my mother’s broom is at least thirty years old and is used every day. Of course, she’s changed the head about ten times and the handle at least twice, but it’s the same broom. Amazon Web Services (AWS) was launched in 2002, and the digital world has gone through tumultuous change since. If your system is older than that, and you haven’t fundamentally changed your architecture to cope, then the writing is on the wall. You’ve probably already noticed the smell. If you are observing resistance to adaptation, loss of equilibrium, an accumulation of problems, flickering performance, or just general system weirdness, then you need to begin anticipating a critical transition (Scheffer et al., 2012), and for standard monolithic systems, with their inherent levels of homogeneity and interconnectedness, that transition will be catastrophic. The prognosis is terminal. Don’t be fooled by the fact that your system is large and powerful. Remember, Nokia phones and BlackBerrys were once dominant too.

Of course, many paradigms come and go, and some look at microservices and scoff that they’re just a fad or that they deviate from good traditional design, but they’re wrong (Fowler, 2014). Like all good technologies, microservices’ time in the sun will eventually come to an end; but they are built with agility, scalability and resilience by design, so don’t expect that to be anytime soon. Even now, nascent technologies like serverless computing are maturing and vying for some space (Jonas et al., 2019), but it will take a lot to dislodge microservices. That’s why companies like Netflix, PayPal, Spotify and Twitter have dumped their monolithic services in favour of a microservice architecture. Maybe it’s time you did too, before it’s too late.

Bibliography
Jonas, E. et al., 2019. Cloud Programming Simplified: A Berkeley View on Serverless Computing. [Online]
Available at: https://www.researchgate.net/publication/331034553_Cloud_Programming_Simplified_A_Berkeley_View_on_Serverless_Computing

Fowler, M., 2014. Microservices and the First Law of Distributed Objects. [Online]
Available at: https://martinfowler.com/articles/distributed-objects-microservices.html

Richardson, C., 2018. Pattern: Saga. [Online]
Available at: https://microservices.io/patterns/data/saga.html

Scheffer, M. et al., 2012. Anticipating Critical Transitions. Science, 338(6105). [Online]
Available at: https://science.sciencemag.org/content/338/6105/344

Writer – Sean McLaughlin

Dying for Data

August 11th, 2022

How conventional EHRs are contributing to physician burnout, and what can be done.

“Almost one third of Irish hospital doctors experienced burn-out, indicating suboptimal work conditions and environment”¹

“50% of doctors reported being emotionally exhausted and overwhelmed by work”¹

“The annual cost of physicians spending half of their time using EHRs is over $365 billion (a billion dollars a day) – more than the United States spends treating any major class of diseases and about equal to what the country spends on public primary and secondary education instruction.”²

“54% of physicians rate their morale as somewhat or very negative”³

Physician burnout is real, and it is getting worse. In a 2019 Health Affairs blog, a group of top healthcare CEOs called physician burnout a “public health crisis”.⁴

In this blog, I do not want to dwell on the statistics, because all they do is substantiate what we already know. Where I think we need to focus our efforts now is not on the “if” of burnout, but on the “how”. It is worth noting that the first EMR was developed in 1972 by the Regenstrief Institute⁷, with burnout among doctors first described just two years later, in 1974.⁸

Physicians’ needs are simple. Beyond those contained in Maslow’s hierarchy, physicians have the need to provide quality medical care, maintain autonomy, fulfil expectations, and build rapport with their patients. In current health systems, each of these needs is systematically challenged on a daily basis, both by the EHR and by other forces.

Outlined below are just some of the ways the EHR contributes to physician burnout, and my take on how change can be brought about for the better.

  1. Administrative Burden
    The earliest EHRs of the 1970s focused primarily on the administrative aspects of healthcare provision. Unfortunately, this ethos has remained ingrained in many systems and is evident in the poor usability experienced by physicians. How can a system with billing, scheduling, registration and so on at its core ever expect to provide the complex, bespoke features physicians require? The simple truth is this: physicians are not administrators. Many EHRs are built on the premise of administrative tasks. Asking a physician to mould their workflow into software fundamentally built for an entirely different professional field is preposterous, and it causes a huge amount of understandable frustration.
  2. Constant Connectivity
    “Surprisingly, in a milieu where evidence is the key driver of patient treatment, the evidence on the relationship between workplace psychosocial environment and employee health is paid little attention by those who fund and manage healthcare organisations. It is buried under the constant refrain of ‘putting the patient first’ with little regard for those who are instrumental in providing care.” – Professor Blánaid Hayes, RCPI.

We know that physicians need to be able to switch off. “Resilience” workshops will tell physicians that their inability to do this is the major contributor to their burnout. At the same time, we see conventional EHRs evangelising “connectivity” and the idea of a “doctor in your pocket”. Constant connectivity means that at some point in the day, every day, a physician needs to be contactable by, and as a result responsible for, their patients. While the vast majority of physicians continue to cite a high desire to practice medicine, a growing number cite constant connectivity as a major contributor to their stress levels.

  3. Expectations
    Protocols, guidelines, “Dr. Google”, paperwork, research… the list goes on. Medicine has moved from an age where physicians were expected to heal some illnesses to one where they are expected to correct every possible wrong in the life of the contemporary patient. In his book “Can Medicine Be Cured?”⁵, Seamus O’Mahony eloquently outlines this expansion of the expectations placed on physicians. EHRs are sold to healthcare executives on promised “increased productivity”. In practice, this translates into a workforce forced to adapt their workflow to a cumbersome technology that doesn’t follow their thought flow, while management updates “targets” and “deliverables” to align with the outcomes promised to them by the EHR vendor.
  4. Interactions
    It is ironic that systems which claim to increase time for physician-patient interactions are instrumental in reducing it. “There is broad agreement on the need for more face-to-face time between clinicians and patients and less time spent on the electronic health record and documentation.”⁶ Through poor user experience, cumbersome workflows, and excessive data-entry requirements, physicians are spending less and less time with their patients. How can a profession so strongly motivated by the desire to help patients benefit from less time with them? Physicians get their job satisfaction from interacting with humans and alleviating their suffering, not from knowing that their hours of clinical coding will ensure accurate billing for the insurers.
  Conclusions
    Now, it is imperative to clarify that I am not for a second implying that conventional EHRs should be treated as some sort of scapegoat for physician burnout. What we need to do is recognise that EHRs play a major role, and combat this. In an age when technology is constantly evolving to meet user needs, surely the optimisation of EHRs is low-hanging fruit?

We need to implement systems that recognise the unique needs of physicians and their medical colleagues. We need to recognise that expecting a physician to use tools built for administrators is like asking your hairdresser to dry your hair with their appointment book! Systems that focus solely on the clinical needs of physicians will be the ones that truly reduce administrative burden. These systems will empower rather than oppress physicians, by providing solutions that fit their workflows and practices. Systems also need to recognise that the antiquated view of physician = physician = physician no longer holds true. The IT needs of a cardiologist are vastly different from those of a pathologist, and it is ignorant to suggest that they should both bend to fit a rigid system. There is also a fine line between “connectivity” and 24/7 responsibility. Recognising this, and allowing for it within the fabric of the IT system employed, is key.

Taking steps to implement a clinically focused system is not going to end physician burnout. What it can do is show all members of the healthcare team that their needs are recognised, considered, and important. Beyond the needs of Maslow’s hierarchy, physicians just need to be allowed to be physicians. There is no reason healthcare IT systems cannot accommodate this.

References

  1. Doctors don’t Do-little: a national cross-sectional study of workplace well-being of hospital doctors in Ireland. BMJ Open, Vol. 9, Issue 3; Blánaid Hayes, Lucia Prihodova, Gillian Walsh, Frank Doyle, Sally Doherty
  2. 3 Ways to Make Electronic Health Records Less Time-Consuming for Physicians: Harvard Business Review, January 10, 2019; Derek A. Haas, John D. Halamka, Michael Suk
  3. Physician burnout in 2019, charted: Advisory Board January 18, 2019
  4. EHR Usability, Workflow Strategies for Reducing Physician Burnout; Kate Monica, EHR Intelligence
  5. Can Medicine Be Cured, Seamus O’Mahony
  6. New England Journal of Medicine (NEJM) Catalyst Spring 2018 report
  7. Healthcare, Extracting Data: A Brief History of the EMR: Extract; Chantel Soumis
  8. Burnout in Doctors, Irish Medical Journal, JFA Murphy

Writer – Dr. Sarah O’Reilly

Quality and Infocare

August 11th, 2022

In our fast-changing online world, customers expect a business to be more efficient and to deliver value faster and better, with higher levels of quality and service.

In this blog we will zoom in on quality.

Soteria® promotes quality healthcare and patient safety through its technology to standardise the processes of care and to ensure that accurate, up-to-date, reliable information is available to clinicians where and when they need it most.

This is how Soteria® improves the quality of patient care:
– It has well-integrated modular business components.
– Information is managed in real time.
– It is the first intelligent clinical information system that works like clinicians think.
– It is highly configurable.
– It supports a variety of clinical workflows.
– It was designed and developed specifically with clinicians and healthcare in mind.
– It streamlines outdated systems and process flows.

At Infocare, the quality of care is not only in the hands of our customers, the clinicians who use our products. Quality is also embedded in our organisation, in the way we build our products and the way we apply quality management in our internal processes.

But what exactly is quality?

One definition of quality is the standard of something as measured against other things of a similar kind; the degree of excellence of something (1).

Quality is often expected to be of a high standard. Our customers use our products to improve their quality of care. That is why we have asked ourselves an important question: ‘How can we deliver high-quality products and prove that our way of working adheres to the same high standards and expectations our customers have when it comes to quality?’

It all started with selecting and implementing an internationally recognised ISO 9001 quality management system (QMS). Becoming certified to ISO 9001:2015 (2) was a huge step forward in maturing our business processes and helped us bring quality management into important areas of our internal management.

Infocare’s quality policy is based on achieving success through shared commitment and meeting or exceeding customers’ expectations through teamwork, continuous improvement and innovation, whilst focusing on quality in everything we do throughout our organisation. By identifying our top-level processes within the company, and then managing each of these discretely, we have reduced the potential for nonconforming products or features being discovered during final processes or after delivery. Instead, nonconformities and risks are identified in real time, by actions taken within each of the top-level processes.

We have documented all our internal processes using the IT4IT Reference Architecture framework (3), an industry standard for managing the business of IT. This allows us to continuously optimise our business and development processes, further aligning business and IT to work together towards the same objectives and key results (OKRs) (4).

This has brought us to where we are today: delivering more value to our customers at a faster pace while still maintaining a high level of quality.

References

  1. Oxford Dictionary
  2. ISO 9001:2015 Quality Management Systems
    More information at: https://www.iso.org/standard/62085.html
  3. The Open Group IT4IT Reference Architecture Framework
    More information at: https://www.opengroup.org/it4it
  4. John Doerr, 2018. Measure What Matters: How Google, Bono, and the Gates Foundation Rock the World with OKRs.
    Available at: https://www.whatmatters.com/


Writer – Business Development Team

 

Will Nature or Nurture Win the Wearables Market?

August 11th, 2022

There is only one way, that does not involve finding a magic money tree, that the pressure on the NHS and Social Care can be eased in the long term, and that is by people living healthier lives and being accountable for their own healthcare.

The problem is that human nature has been overtaken by the mores of modern society and so people often take the easier, less healthy, less accountable and more convenient route.

There is an abundance of lifestyle apps and wearable technologies aimed at largely middle-class, health-conscious consumers. *By 2021, the market for wearable devices in the healthcare sector is projected to expand to more than US$17 billion from today’s US$2 billion.

Health management could be revolutionised by a broad range of people gaining the means to monitor their own health. As the population ages, it is ever more vital that they are enabled to take care of themselves instead of relying on an overburdened healthcare system. Living longer means having to retire later so staying healthy and active is imperative. Will those who need to manage their health most be wearing wearables? And will the right software and support be available to them?

How can the user base be expanded beyond those who are willing to pay for the technology and have the motivation to monitor themselves against notional “targets”? Outside this traditional target group, studies have shown that initial progress is rapid but soon tails off, as the “average” person reverts to a more modern, consumption-led lifestyle. Where studies have employed healthcare professionals to monitor and interact with groups, participants’ progress remains steady. But the same people, when left to their own “devices” (pun intended), regress. Why is this?

When faced with a healthcare professional, we defer to their knowledge, whereas we don’t mind lying to a smart app or telling our phone we will do the steps tomorrow. An app is not going to give us “that look”, and we don’t establish a relationship with an app in the cloud.

For wearables to have a real impact and make a long-term difference to the general population, they must be linked with professionals, and they must provide rewards and consequences that are measurable. In healthcare systems in the UK, the ratio of professional caregivers to patients is unsustainable, so we need to leverage technology to help professionals identify and act upon exceptions and predict trends based on data, leading to earlier and consistent interventions so that people don’t become patients.
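
As a purely illustrative sketch of what “acting on exceptions” might look like, the fragment below flags wearers whose recent activity has fallen well below their own baseline, so a professional reviews only the exceptions rather than every data point; the 25% threshold and step counts are invented.

```python
# Purely illustrative: flag wearers whose recent activity has dropped well
# below their own baseline, so a professional can intervene by exception
# rather than watching every data point. The 25% threshold is invented.
def flag_for_review(daily_steps, recent_days=7, drop_threshold=0.25):
    """Return True if the recent average falls 25%+ below the baseline."""
    baseline = sum(daily_steps[:-recent_days]) / max(len(daily_steps) - recent_days, 1)
    recent = sum(daily_steps[-recent_days:]) / recent_days
    return recent < baseline * (1 - drop_threshold)

steps = [9000] * 21 + [4000] * 7   # three weeks active, then a sharp drop
print(flag_for_review(steps))       # True: worth a clinician's attention
```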

The healthcare sector can and should take advantage of the innovations in the wearables market, while proactively fostering links between wearers and the clinician, clinic or specialty.

We live in a society that, for the most part, takes care of its sick. But we are also capable of taking care of ourselves and keeping ourselves healthy by living more active lives. Wearable technology can provide the best of both worlds, but only if we link healthcare professionals, at critical junctions, to the people who wear the technology, making timely interventions possible. This is the tipping point: the point when people become known to healthcare or social care providers before they become patients.

Reference
https://www.statista.com/statistics/607982/healthcare-wearable-device-revenueworldwide-projection/

Writer – Business Development Team

Why implementing IT4IT has been a good decision

August 11th, 2022

Delivering value as one team
With a lot of siloed processes and disparity within your departments and teams, there is an urgent need to align and connect all teams and have them constantly add value. If you believe this only affects large multinationals, you are wrong: it is also very apparent in a lot of smaller companies and start-ups.
But how to align your business teams, product teams, financial teams and technical teams and have them all working as if they were one?

Align business and IT
In October 2015, the Open Group launched the IT4IT Reference Architecture Standard, providing vendor-neutral, technology-agnostic, non-prescriptive, holistic guidance for the implementation of IT management capabilities for today’s digital enterprise. (1)
IT4IT is about managing the business of IT; it does not tell a business how to do things. Instead, it complements and enhances what is already there, linking structures together as the natural next step.

Value chain
The foundation of IT4IT is Michael Porter’s value chain, which is based on the idea of looking at an organisation as a system made up of subsystems, each with inputs, transformation processes and outputs. (2)

The IT4IT value chain consists of four value streams:

  1. Strategy to Portfolio, the bridge between business and IT, which aligns new demands or enhanced conceptual services in one central product portfolio
  2. Requirement to Deploy, which turns the conceptual service into a logical service with more detailed requirements, resulting in a built and tested deployable service
  3. Request to Fulfill, which transitions the deployable service into production
  4. Detect to Correct, which groups the activities around the business of IT operations and the services these teams deliver (such as monitoring and remediation)
Diagram: IT Value Chain

IT4IT implementation
At Infocare I am leading my second IT4IT implementation, and I have been able to apply valuable lessons learned from my previous experience of implementing this powerful standard.
We have taken a phased approach, starting at the component level: we identified which components are key to our operations and prioritised accordingly. Since all departments and teams will be affected by the work involved in implementing IT4IT, my advice is to take your time and plan wisely; there will be a lot of changes when linking and optimising existing processes and frameworks. It is also important to make sure there is absolute commitment from the top of the organisation.

Diagram: IT4IT Reference Architecture standard level 1

The IT4IT Reference Architecture standard consists of the IT value chain and a three-layer reference architecture. Each layer provides a lower-level, more detailed description of the Functional Components, with more detail around each component’s inputs and outputs, artefacts and the attributes of key data objects.
The first level is shown in the diagram above and consists of the Functional Components (blue boxes), which act upon the Key Data Objects (black circles). The Service Backbone Data Objects (purple circles) show the relation between the three lifecycle phases of the service model (conceptual, logical and realized).
Besides the standard IT4IT components, we have added some components related to Infocare-specific processes, like Contract Management, Labor Management and Audit Management, which complete the IT4IT supportive flows. Our ISO 9001 Quality Management certification, for example, is largely based on the IT4IT value streams and process descriptions.

Benefits

A lot of positive changes took place during the implementation of IT4IT:

• We now have full ownership of all Functional Components
• Revisiting existing frameworks has made us more vendor- and tool-agnostic
• We have created more insight into managing continuous improvements across the organisation
• Our various teams work more closely together and have started functioning as one team

We are looking forward to further implementing IT4IT, building out and maturing our agile organisation.

References
(1) The Open Group IT4IT Reference Architecture Framework
More information at: https://www.opengroup.org/it4it

(2) Michael Porter’s value chain
More information at: https://www.ifm.eng.cam.ac.uk/research/dstools/value-chain-/

Writer – Technical Development Team

Preventable Harm In Healthcare

August 11th, 2022

INTRODUCTION
“Primum non nocere”, the Latin phrase meaning “First, do no harm”, is commonly associated with the Hippocratic Oath and is a basis for the ethics taught in medical school. Preventing harm, in the context of healthcare delivery, is of great importance to patient safety, overall quality and reducing the cost of care. While a goal of zero harm is desirable, it is not always attainable, given that healthcare provision is extremely complex. The focus of this article is on managing the risk of preventable harm. It is therefore important to develop a clear understanding of the nature of preventable harm: once preventable harm is clearly defined, one can deal with the problem more efficiently.

DEFINITION OF PREVENTABLE HARM
There is no academic consensus on what constitutes preventable harm, and no single definition is supported by the medical community as a whole. Most of the definitions (there are over 100) require that the harm be attributable to an identifiable and modifiable cause and that it be preventable in future. Since a complete survey of these definitions is not the subject of this article, we will use the definition of the Institute for Healthcare Improvement:¹ “Unintended physical injury resulting from or contributed to by medical care (including the absence of medical treatment) that requires additional monitoring or hospitalization or results in death.”

EXTENT OF THE PROBLEM
The World Health Organization (WHO) estimates² that in high-income countries as many as 10% of patients are harmed while receiving hospital care and, further, that 50% of these cases are preventable. In low- and middle-income countries the percentage of patients suffering harm is slightly lower, at 8%, but 83% of these incidents were preventable and around 8% fatal. The WHO further estimates that preventable harm to patients in care is one of the 10 leading causes of death worldwide. Some scholars estimate that 10-15% of healthcare costs in the United States can be attributed to the direct consequences of healthcare-related patient harm.³ Already in 2012, preventable harm was estimated to cost the United States $19.5 billion, and malpractice insurance cost an average of $123 for every patient a hospital treats. Some experts calculate that by 2019 the figures could be as much as 10 times higher.⁴ Reducing patient harm has been identified as one of the main areas needed to improve both the outcomes and the costs of healthcare. The US Department of Health and Human Services is forming partnerships with patient initiatives and has specifically targeted a 40% reduction in preventable harm as one of the two key goals of its Partnership for Patients: Better Care, Lower Costs programme.⁵ It is thus clear that addressing patient harm is not only of great importance to improving the standard and quality of healthcare, but also to reducing its overall costs and promoting access to it.

POSSIBLE FACTORS CAUSING PREVENTABLE HARM
It should be noted that the total elimination of preventable harm is not a realistic goal: healthcare providers are human beings and, as such, liable to make mistakes.⁶ Nevertheless, and as can be seen from the above statistics, the need to effectively manage and reduce preventable harm is acute. Various reports and papers have been written on the matter, and most agree on the following main causes of preventable harm:
1. hospital acquired infections
2. surgical errors
3. medication errors
4. misdiagnosis

Most of these causes come about through communication errors (between physicians, nurses, patients and other healthcare providers), insufficient information (which may be lacking when care needs to be co-ordinated, prescriptions decided or results interpreted), patient-related issues (insufficient patient education, inadequate patient assessment), staff problems (staffing may be inadequate, staff may be overworked or not effectively trained) and technical issues (devices may fail, or may not be operated or maintained properly). Preventable harm is the result of a multitude of factors, and organisations often attempt to blame individuals or a particular set of circumstances, failing to understand the complexity of the problem. This leads to the question of what hospitals can do to address the issue in complex, fast-paced and sometimes chaotic circumstances. Healthcare providers may look at their processes, safeguards and methodologies and improve their technology, but real change will likely require some innovative thinking about the entire healthcare environment. One example of such thinking is the so-called “Swiss Cheese Model” of safeguards, originally theorised by Dante Orlandella and James Reason of the University of Manchester.⁷ In this model, a good system has multiple layers of defence, each compensating for the weaknesses in the other layers. Preventable patient harm occurs when the different layers share the same flaws (the holes in the cheese line up to go right through).

HOW CAN SOTERIA® HELP MANAGE PREVENTABLE HARM?
In a competitive world, hospitals have to keep their shareholders and stakeholders in mind. Unfortunately, the primary focus is often on shareholders and profitability, with patients of secondary importance and employees largely ignored. We would argue that this thought pattern should be reversed: if hospitals take excellent care of their employees, the employees will take excellent care of the patients, who in turn will take care of the bottom line. In this spirit, Soteria® was developed as clinician-focussed software that not only lightens the administrative load on healthcare service providers and institutions, but also improves efficiency and provides several protective layers (helping to stop the holes from lining up through the cheese). Its easy-to-use interface offers an organised pathway of patient care and documentation that syncs patient data and provides instant access to doctor, nurse and administrator, regardless of practice, facility or location. It provides the full clinical patient view, with simple, intuitive prompts that bring pertinent medical information to the point of care. Using Soteria®, healthcare providers can save time through its optimised medication lists and CPOE features. It is an efficient, easy-to-use interface supporting natural workflows. All information, tests, orders, care plans, guidelines and results are captured, coded, mapped and saved, and will integrate seamlessly across fragmented systems. A flexible reporting function and reliable audit trail provide a wealth of information to management, to identify trends, proactively plan interventions, measure the efficiency thereof, and assist in the defence of possible malpractice lawsuits. The result: better physician support, improved patient outcomes and optimal clinical efficiency.

References:
¹ www.ihi.org
² https://www.who.int/features/factfiles/patient_safety/en/
³ Slawomirski L, Auraaen A, Klazinga N. The economics of patient safety.
Organisation for Economic Co-operation and Development, 2017
⁴ https://costsofcare.org/tallying-the-high-cost-of-preventable-harm/
⁵ http://www.healthcare.gov/compare/partnership-for-patients/index.html.
⁶ To Err is Human: Building a Safer Health System, Institute of Medicine (Kohn, Corrigan & Donaldson, eds.), 2000
⁷ https://en.wikipedia.org/wiki/Swiss_cheese_model

Writer – Willem Pretorius

The Long and Winding Road(map)

August 11th, 2022

The long and winding road
That leads to your door
Will never disappear
I’ve seen that road before
It always leads me here
Lead me to your door

John and Paul wrote this song (1) almost 50 years ago and I would like to show you that even in today’s world it’s still a long and winding road to deliver your product to your customers.

In one of my previous blogs, about aligning business and IT using the IT4IT reference architecture (2), I explained that a product roadmap is not a one-dimensional, single roadmap, but one that consists of many roadmaps, all of which should lead to delivering your product to your customers’ “door”.

Chaos in a multi-dimensional roadmap
Each company has multiple roadmaps. Sometimes these are not properly managed, or not even visible to senior management or decision makers. For example, you could have a roadmap that is aligned with your product strategy but does not identify the individual needs of your internal teams (like architecture, IT operations and finance), or does not align with the sometimes-conflicting product needs of your different customers.

It is therefore important to align all these different roadmaps and initiatives with your strategic roadmap.

Be aware that all these individual teams and customers can and will bring different views on priority and urgency, which will sometimes lead you away from the winding road to your customers’ “door”. It is a complex task to keep everybody on the same road and to bring order to the chaos of a multi-dimensional roadmap.

A roadmap is not a backlog
A roadmap is a strategic plan that defines a goal or desired outcome and includes the major steps or milestones needed to reach it. It also serves as a communication tool: a high-level document that helps articulate strategic thinking and explains the “why” behind both the goal and the plan for getting there.

A backlog is essentially a to-do list of the tasks required to complete a strategic initiative with needs ranked according to priority. At Infocare, the backlog translates itself into a wide variety of projects that are created from the needs from the individual teams and product strategy.

In our company, the roadmap always goes together with the project backlog in order to reach its high-level strategic goals. These goals, derived from our Objectives and Key Results (OKRs) (3), translate directly into individual projects that can be assigned and tracked throughout their project life cycle.

Communication
Communication about the roadmap to your teams is an important factor in making your product a success and to be able to reach that customer “door” as quickly as possible.

It is not just about sending a message about which project is coming up next. By providing more insight into the different roadmaps and the interdependencies between initiatives and projects, your teams will show more engagement and will play a crucial role in contributing input to upcoming projects.

With today’s ever-changing customer needs, rapidly changing technologies and regulatory requirements, it is important to make sure all your teams are aware of these changes.

Roadmap and strategy
A backlog with just a long list of projects does not send a strong message to your organisation. That is why we have chosen to use company OKRs, which we have further separated into three individual areas that speak more directly to individual team needs and responsibilities. All our projects lead towards the goals stated in our company OKRs, allowing us to present a transparent, clear view of the strategic direction of the company, its products and the individual teams.

Even without deviating from your strategic goals and vision, individual projects can get a higher or lower ranking within the project backlog based on short-term needs as they arise. Change will happen, whether you like it or not. It is therefore very important to manage your roadmap in an agile way and be able to bring in changes at very short notice. Agility, together with proper communication around those changes, will make your teams much more open to change and quick to adapt when new needs are identified and changes to planning are introduced, without losing sight of the strategic goals.

Our road will never disappear, and it will clearly show the strategic reasoning that leads us to our customers’ door.

References
(1) The Beatles, Paul McCartney and John Lennon song lyrics ‘The long and winding road’
Available at: https://www.azlyrics.com/lyrics/paulmccartney/thelongandwindingroad.html

(2) The Open Group IT4IT Reference Architecture Framework
More information at: https://www.opengroup.org/it4it

(3) John Doerr, 2018. Measure What Matters: How Google, Bono, and the Gates Foundation Rock
the World with OKRs.
Available at: https://www.whatmatters.com/

Writer – Business Development Team

System Usability – Giants, Unicorns, Batman and the Joker

August 11th, 2022

When designing systems and making products usable, it is important that we see the end from the beginning. The clearer our vision of where we are going, the better we are able to plan the journey.

You need to understand the problem you are trying to solve. A system in itself is not the end, but it is a means to an end, a way to solve a problem.

When we find more efficient systems (more efficient ways of accomplishing tasks), the old systems become obsolete and irrelevant (like MySpace did when Facebook showed up).

Let’s take a look at the evolution of transport through the ages…

The first stage of transportation for man was on foot. The next stage did not occur until some smart alec got around to taming animals and figuring out that some of them could actually be ridden and could assist with carrying heavy loads. This in turn enabled humans to travel further and complete tasks they would otherwise not have been able to do.

The next important discovery was the wheel. Combining the two concepts vastly improved man’s capability.

With the advent of the motor vehicle, the use of animals as a primary mode of transport eventually became obsolete.

It is a silly analogy, but the point is that it was not a sudden thing: it was an accumulation of knowledge and advancement, built over many centuries and married in one amazing machine. The discovery of the wheel, fire, steel, the laws of motion, gravity: these all contributed to something never before seen that is now an accepted norm.

That is the heart of technological advancement. It is a constant refining and leveraging of accumulated knowledge and understanding of what is possible. It is discarding what is no longer necessary or is cumbersome, and improving what is more efficient.

It is dissecting what has gone before and getting rid of the crud to produce something better, or different, or new.

The better your level of understanding, the more usable your system will be.

You can see better when you stand on the shoulders of giants, but beware of the unicorns
When we speak about system usability, all the tech, the design, and everything else that goes into it is there to perform a task. A successful system accomplishes this task more efficiently than those that went before. Standing on the shoulders of those who have gone before, it solves problems in a way that disrupts the status quo.

While there are definitely giants in the industry we can lean on to sharpen our understanding, there are also unicorns. Unicorns are the mythical creatures we run after to define our processes, but at the end of the day we end up chasing the wind, wasting millions in product development and time that could have given us the edge had we not been otherwise occupied.

As far as roles within companies are concerned, I will speak about the unicorn I know best: the UX Designer, which is what I thought I was at some point.

For those who don’t know, UX stands for User Experience. Here is an excerpt from the book Making Meaning: How Successful Businesses Deliver Meaningful Customer Experiences (1), by Steve Diller, Nathan Shedroff and Darrel Rhea (2005):

Experience design is not driven by a single design discipline. Instead, it requires a cross-discipline perspective that considers multiple aspects of the brand/business/environment/experience from product, packaging and retail environment to the clothing and attitude of employees. Experience design seeks to develop the experience of a product, service, or event along any or all of the following dimensions:

● Duration (Initiation, Immersion, Conclusion, and Continuation)
● Intensity (Reflex, Habit, Engagement)
● Breadth (Products, Services, Brands, Nomenclatures, Channels/Environment/Promotion, and Price)
● Interaction (Passive < > Active < > Interactive)
● Triggers (All Human Senses, Concepts, and Symbols)
● Significance (Meaning, Status, Emotion, Price, and Function)

In short, User Experience Design does not belong to one person. People with that broad spectrum of knowledge across multiple disciplines are few and far between, and when you find such a person they will probably work alone, because everyone else is just too incompetent to get what they are saying. These are people like Linus Torvalds, Nikola Tesla and Leonardo da Vinci: great visionaries, big loners. Two of them are long dead, and one is still awesome. You read about them in books, but the chances are slim you will ever meet them, and if you do, they will not work for you, at least not for long.

You get very talented user interface designers who can have a powerful impact on the usability of a system, but at best they can offer an intuitive visual experience based on solid design principles and a deep understanding of user habits. A good UX team comprises seasoned Business Analysts, Systems Analysts, System Architects, UI Designers, Backend and Front-End Developers, copywriters and scrum masters, among others, who all work together to optimize the user experience. No one person can take that role.

A better role for user experience would be a UX Facilitator: someone who can tap into your workforce and steer them towards a unified vision. Someone who will keep them on track, who will value their skills, who will make something magical. It is a philosophy that needs to be maintained, not a role that needs to be filled. The chances are you have giants working for you; they just need to be inspired in the right direction.

Job descriptions such as UX Designer and Full-Stack Developer are not the only unicorns.

A unicorn can be buzzwords and acronyms that, instead of making things easier to understand, obfuscate what we are trying to achieve.

It could be writing endless pages of documentation that consume countless man hours and get read by hardly anyone.

It can be trends in the industry: hyped-up software implementations that quickly fade or fizzle away, leaving you spending time on tech and development that is no longer relevant. There is a simple rule to avoid running after unicorns: know what you are getting yourself into, and don’t do things unless they contribute to the problem you are solving.

Don’t get swept up by the industry buzz; think for yourself. Unicorns may be imaginary creatures, but they will leech you dry and can end up sinking your business.

If you want to be disruptive you need to be a trendsetter, not another sheep in the sea of software companies.

Sustainable is not disruptive
Progress means change. If you want a successful tech stack, then you need to be able to take an honest look in the mirror and do what it takes to offer the best product you can.

We all know the story about ostriches sticking their heads in the sand whenever they smell fear. Fear causes blindness, and it will leave you vulnerable.

In The innovator’s dilemma: when new technologies cause great firms to fail (2), Christensen lists two main categories of innovation that companies operate in:

Sustaining
An innovation that does not significantly affect existing markets. It may be either:

Evolutionary
An innovation that improves a product in an existing market in ways that customers are expecting (e.g., fuel injection for gasoline engines, which displaced carburetors.)

Revolutionary (discontinuous, radical)
An innovation that is unexpected, but nevertheless does not affect existing markets (e.g., the first automobiles in the late 19th century, which were expensive luxury items, and as such very few were sold)

Disruptive
An innovation that creates a new market by providing a different set of values, which ultimately (and unexpectedly) overtakes an existing market (e.g., the lower-priced, affordable Model T Ford, which displaced horse-drawn carriages)

At today’s speed of innovation, and with tech giants such as Google, Apple and Microsoft throwing billions of dollars at monopolizing the software industry, only being disruptive will allow smaller tech companies to beat them at their own game. If it exists, is efficient, and is widely adopted, then you are too late. You need a better idea. What will make people come to you? How will you inspire your clients?

Make no mistake, software giants are master disruptors, but we have to learn from them and harness the wonderful toys they give us so that we can beat them at their own game. They are the giants; we must be the giant slayers.

I love this quote from Batman when he fights the Joker:
Batman: Excuse me. You ever danced with the devil in the pale moonlight? [punches Joker and knocks him against a bell, before grabbing him] I’m going to kill you.
Joker: You… IDIOT!!! You made me, remember? You dropped me into that vat of chemicals. That wasn’t easy to get over, and don’t think that I didn’t try!
Batman: [smirks] I know you did.
[Batman punches Joker in the stomach and knocks him through a wall. He grabs him and helps him up, only to punch him in the face again. Joker stands up, muttering and clutching his mouth until he spits out a chattering-teeth toy. He retaliates by punching Batman in the stomach, only to break his fingers on the body armor]
Batman: You killed my parents.
Joker: What? [spits blood on the floor] What are you talking about?
Batman: I made you; you made me first.

I would like to think that we can be like Batman, who grew up as a scared kid in a dark city and became a contender who put dread in those who formerly ruined his city.

Google is not evil (sic), and neither is Apple, Microsoft, or any of the others, but if we don’t beat them they will swallow our future. At Infocare we currently harness some of Apple’s, Google’s and other giants’ tech to drive our own systems, but we are quite aware of who we are and what we want to achieve.

So, what about system usability?
The bottom line is that system usability is a multi-disciplinary pursuit. As such, the system needs to be defined by strong leadership. The success of the system’s usability will be determined by your ability to harness the different experts you have hired and get them to work to a common clearly defined vision.

Most people in the tech industry are forgotten, but everyone knows who Bill Gates is, who Steve Jobs was and who Thomas Edison was. They are disruptive innovators who knew how to harness the magic in others to build something the world had not seen before, and they will go down in the annals of history.

We need to haul in understanding like a net; we must find clarity of vision and purpose. What happens in your company, and in the minds of the people driving the product, will be reflected in the system that is produced.

Where there is no clarity, any software system will end up being a disjointed Frankenstein.

An insightful leader will impart a clear vision to his capable team, and the systems they produce will turn an industry on its head.

References
(1) Making Meaning: How Successful Businesses Deliver Meaningful Customer Experiences.
By Steve Diller, Nathan Shedroff, Darrel Rhea (2005).

(2) The innovator’s dilemma: when new technologies cause great firms to fail.
By Clayton M. Christensen (1997).

Writer – Technical Development Team

Performance and Reliability in Healthcare Systems

August 8th, 2022

Medicine has been around for thousands of years, and for the vast majority of that time it has gotten on fine without computers. Even today, more medical practices use good old-fashioned pen and paper than electronic systems. Even though there are many important and potentially life-saving advantages to using these systems, users will only adopt them if they can see the benefits. The greatest impediments to this are poor performance and unreliability.

At first glance, this should be obvious. If a system is unreliable, users will not want to use it. If a system is unavailable when they need it, the very nature of medical care will require them to revert to tried and trusted pen and paper. Speaking of trust, the less reliable a system is, the more likely users are to lose confidence in it, feeding the normally strong desire to return to the ‘old way of doing things.’ This is not human failing or laziness. A physician in an emergency is not going to resort to a tool they don’t trust, and a tool that is untrustworthy and unreliable has no place in an area as serious as medical care. Consider the hypothetical scenario where you are lying twitching on a hospital bed, in anaphylactic shock, unable to speak and tell the doctor that you suspect there might have been peanuts in your lunch. Do you want your potential saviour reading your allergy list off a computer that keeps crashing, or off a clean sheet of paper pulled from a file?

Performance is equally important. If an electronic system is slower than the old way of doing things, users will not embrace it. If it takes two minutes to pull down your patient list when you can just read it off a whiteboard, you will be forgiven for putting down the iPad. More seriously, in time-critical scenarios, it is simply dangerous to force a physician or patient to wait for a slow system to respond.

All this is a long way of stating that, if healthcare systems are to provide real benefits to patients and medical practitioners, then at the very least they must be more reliable and performant than the systems they are replacing. Otherwise, in the best-case scenario, they will not be accepted. In the worst case, they can cost lives.

So how, then, do we in Infocare reach this important standard? Firstly, there are hardware considerations. The bricks and pipes that make up the physical system are critical to a good quality framework and merit a blog of their own. Today, we will discuss the software. Well-designed healthcare software follows some basic rules to ensure it is always available and replies promptly when requested.

Rule one of reliability is to ensure errors are handled correctly. It is a fact of life that things will inevitably go wrong. No system has ever been built that can guarantee, 100%, that nothing bad will ever happen. A user might enter bogus data, accidentally abort an important process, or misuse the interface. A network connection might die, or a third-party service could go down. In every case, these errors and glitches should not be allowed to take down everything. In digital systems, the default behaviour upon encountering something anomalous is to shut down, often reporting a bewildering error to the user. But a properly designed framework will capture these errors, report something useful to the user, and attempt either to recover or to protect the rest of the system from falling over.

Take, for example, the infamous HTTP 500 error, often the last thing seen by users of a buggy website before they vow never to visit it again, while the owners desperately try to reboot the server before other people notice something has gone wrong. A reliable system will capture the fault, recover the service and return a useful, friendly message to the user. It will attempt to fulfil the request, or any part of it that it can, and also alert the administrator to the problem so it can be investigated promptly. Popular websites like reddit, for example, respond to such failures with a friendly, branded error page.

Compare this to the default behaviour of less well-designed websites, which dump the raw server error on the user and leave them none the wiser.
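To make the pattern concrete, here is a minimal sketch in Python using the Flask web framework; the logger name and the message wording are our own invention, not a prescription:

    from flask import Flask, jsonify
    import logging

    app = Flask(__name__)
    log = logging.getLogger("clinic-api")  # hypothetical service name

    @app.errorhandler(Exception)
    def handle_unexpected_error(exc):
        # Record the full technical detail so administrators can investigate promptly...
        log.exception("Unhandled error while serving a request")
        # ...but show the user something friendly instead of a raw HTTP 500 page.
        return jsonify(message=(
            "Sorry, something went wrong on our side. The team has been "
            "alerted; please try again in a moment."
        )), 500

One catch-all handler like this keeps a single faulty request from becoming the last thing a user ever sees of the site.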

Clever systems will attempt to anticipate errors before they happen and head them off. We call this defensive programming. If, for example, a patient’s date-of-birth is in the future, or their social security number contains letters, we can add code to validate these inputs and request the correct information before it enters the system and potentially causes more serious issues. We can check that data exists, is valid and is consistent before processing it, thus avoiding what are known to programmers as null pointer exceptions, the cause of so many server crashes. The more failure scenarios we can predict, the more we can prevent from tripping up the system.
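As a hedged illustration of such defensive checks, here is a small Python sketch; the field names and the NNN-NN-NNNN format are invented for the example:

    import re
    from datetime import date

    def validate_patient(record: dict) -> list[str]:
        # Return the problems found; an empty list means the record is clean.
        errors = []
        dob = record.get("date_of_birth")  # expected as a datetime.date
        if dob is None:
            errors.append("Date of birth is missing.")
        elif dob > date.today():
            errors.append("Date of birth cannot be in the future.")
        ssn = record.get("ssn", "")
        if not re.fullmatch(r"\d{3}-\d{2}-\d{4}", ssn):
            errors.append("Social security number must be digits in NNN-NN-NNNN form.")
        return errors

    # Bad input is rejected before it can enter the system:
    print(validate_patient({"date_of_birth": date(2090, 1, 1), "ssn": "AB1"}))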

In Infocare, we make sure to validate data from users and from third parties before processing it. We check for the availability of services before calling them, and we attempt to anticipate failures before they happen. In the rare cases where errors do occur, we handle them gracefully, contain the effects, log them, and provide meaningful feedback to the user.

As well as being reliable, healthcare systems must perform well. In this area, there are a few good practices that we follow. The golden rule of performance is ‘if you don’t need it, don’t fetch it’. If a physician needs a patient’s insurance information, don’t bring their entire medical record along as well. If they need an address, they get just that. Systems often grind to a halt under the strain of trying to retrieve and transport too much information, much of which is not even needed.
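In practice this often comes down to how queries are written. A minimal Python sketch, with the table and column names invented for illustration:

    import sqlite3

    def fetch_patient_address(conn: sqlite3.Connection, patient_id: int):
        # Ask for exactly the columns needed; never "SELECT *", which drags
        # the whole record (history, notes, attachments) across the network.
        return conn.execute(
            "SELECT street, city, postcode FROM patients WHERE id = ?",
            (patient_id,),
        ).fetchone()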

The second, rather complementary rule of performance is ‘don’t fetch it more than once’. This is better known in software development as caching. There is no sense in constantly requesting information that doesn’t change. A patient’s insurance or home address is unlikely to change during the course of a consultation, so once we have it, there is no need to keep asking the server for it every time the user moves around the application.
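Python’s standard library makes a simple version of this nearly free; a sketch, with the service endpoint entirely hypothetical:

    from functools import lru_cache
    import urllib.request

    BASE_URL = "https://emr.example.com"  # hypothetical records service

    @lru_cache(maxsize=1024)
    def get_home_address(patient_id: int) -> str:
        # The server is contacted only the first time each patient is looked up;
        # repeat calls during the consultation are answered from memory.
        with urllib.request.urlopen(f"{BASE_URL}/patients/{patient_id}/address") as resp:
            return resp.read().decode("utf-8")

A real system would also decide when cached entries expire; lru_cache simply evicts the least recently used entry once the cache is full.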

At Infocare, we endeavour to ensure that only the information the user needs is fetched and processed, and nothing else. We identify data that only needs to be fetched once and cache it. In doing so we can be certain that our products will respond as quickly as possible and not add unnecessary load to your network.

A good healthcare system is always available when you need it and responds promptly when queried. Sticking to the guidelines described here has helped us produce software that users want to use and extract real value from. That is a win for us, our customers and their patients. Now that really is a good performance.

Writer – Technical Development Team

Performance and Reliability of Healthcare Systems (Technical)

August 8th, 2022 by

Technology today offers a wealth of opportunities for the care industry, but with them comes a depth of challenges, not least the ever-moving target of new technologies adopted by vendors, employees and other care providers. For critical medical devices we can still turn to familiar certifications for standards of quality. A well-established and extensive range of ISO standards will not only provide the necessary quality levels but go into detail on how to achieve them, for example device calibration, auditing and testing, while at the same time pulling in national standards, such as ANSI patient safety standards in the U.S. or OHSAS in the U.K.

The HIPAA acts are a relatively new addition (dating from the mid-1990s) and introduce a change in direction towards protecting data in its logical form, as opposed to the traditional view of managing systems. This works well in the current environment, where technologies change faster than advisory groups can process requests to update standards. Technology is becoming increasingly integrated: physical components are now typically virtual, and processor microcode is giving way to software running on generic commodity hardware.

The upshot is that the industry’s current approach is data-centric and is starting to turn away from the layered provisioning model that evolved technology into what it is today.

Now data is viewed as the content the systems are built around, to the extent that any data not attributed to the user is seen as metadata, just another wrapper around the payload. The relevant data is tied to the business use case, the core reason the systems are built.

Where the “User” is a medical patient at a clinic, HIPAA’s orientation around Protected Health Information works particularly well.

At Infocare, project planning builds in HIPAA compliance from inception, and the team is educated on HIPAA, HITECH and the Privacy Rule across the product workflow. Processes are ISO 9001 and 27001 compliant end to end, from the creative Agile teams in Development through to ITIL-controlled production deployments.

Just like other enacted regulations (Sarbanes–Oxley, for example), HIPAA’s focus for I.T. is on data security. For compliant companies such as Infocare, that security is well documented, from the technical implementations to the work practices. Standard medical regulations, however, do not address performance and reliability in any meaningful way.

In clinics across the world, administrators share the same experience of “the system” on a go-slow or “hanging for a minute”. It is not tied to any particular vendor or application, and the possible sources of the issue are wide-ranging: the user’s computer, the office wi-fi, or the “server side”, to name just a few of the plausible explanations.

Not only does this affect work momentum, but there are bigger questions, like how safe the back end is from the next power outage or storm, how long it will take to bring back into operation, and what will be lost in any incident.

In 2019 an internet backbone outage in the U.S. caused 911 services to go down for two days. Amongst many other unrelated critical outages that year, all Facebook services were down for a day, and Google’s e-mail service was down globally for half a day.

For Infocare the challenge is to figure out how to make security and performance work together from the systems design stage onwards throughout the product lifecycle, bearing in mind that a high level of security and compliance is the first stipulation. Developers are constantly “smart thinking” new ways to make the product run smoother and faster and to improve the user experience. The product then undergoes extensive testing in the client’s environment. But even with this level of confidence in-house, Infocare still needs to manage the integrated environment and take into account the impact external systems will have on the product, for example fluctuating quality in a clinic’s internet connection or a personal computer with performance issues.

A suite of monitoring tools with visual dashboards is the eyes and ears of the backend engineering team. Everything from hard disks to networks to software components is monitored, creating an extensive range of monitored endpoints, which are then categorized by the services affected and by whether the error is critical enough to page the on-call engineers.
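As a rough sketch of that triage step (the service names and the paging hook are invented; a real deployment would integrate with a paging tool rather than print):

    import time

    CRITICAL_SERVICES = {"appointments-api", "patient-records-db"}  # hypothetical

    def page_on_call(message: str) -> None:
        print(f"PAGE on-call engineer: {message}")  # stand-in for a real pager

    def log_for_review(service: str, check: str) -> None:
        print(f"LOG: {service}/{check} degraded; review during business hours")

    def triage_alert(service: str, check: str, healthy: bool) -> None:
        # Route a failed check: page immediately for critical services, log the rest.
        if healthy:
            return
        if service in CRITICAL_SERVICES:
            page_on_call(f"{service}: {check} failing at {time.ctime()}")
        else:
            log_for_review(service, check)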

Predictive analysis in the system assesses whether an outage is likely, for example when the volume on a network pipe is higher than usual and still growing. Intelligent infrastructure tools repair failing services by automatically selecting the appropriate action, in what is described as a “self-healing” network. System performance is also managed: new service instances can be spun up during high demand, while latency and other potential issues are monitored across the network.
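The idea behind that kind of prediction can be shown with a deliberately simple check; the headroom threshold and the crude trend test here are our invention, where real tools use far richer models:

    def outage_expected(samples: list[float], capacity: float,
                        headroom: float = 0.8) -> bool:
        # Flag trouble when traffic is above the headroom threshold AND trending up.
        if len(samples) < 2:
            return False
        above_threshold = samples[-1] > capacity * headroom
        trending_up = samples[-1] > samples[0]
        return above_threshold and trending_up

    # e.g. a pipe rated for 100 Mbit/s, with recent samples climbing through 85:
    print(outage_expected([60.0, 72.0, 85.0], capacity=100.0))  # True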

Infocare architecture is “Disaster Recovery” tested for fail-over in a multi-site environment, and a combination of high-availability solutions acts as a multiplier against outages, with services able to reside in multiple locations. The architecture is also designed to keep the user experience seamless during outages: traffic is switched over to alternative internet connections, or new service instances are spun up to scale with demand. This visibility over the end-to-end system needs to work not just for the backroom experts, but also from the perspective of the end user. For example, if a user action fails, the root cause needs to be pinpointed.
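To make the fail-over idea concrete, here is a hedged sketch of trying sites in turn from the client side; the endpoints are invented, and production systems usually switch traffic at the DNS or load-balancer layer instead:

    import urllib.request

    SITES = [  # hypothetical primary and disaster-recovery locations
        "https://emr-primary.example.com",
        "https://emr-dr.example.com",
    ]

    def fetch_with_failover(path: str) -> bytes:
        # Try each site in turn so an outage at one location stays
        # invisible to the user.
        last_error = None
        for base in SITES:
            try:
                with urllib.request.urlopen(base + path, timeout=5) as resp:
                    return resp.read()
            except OSError as exc:  # connection refused, timeout, DNS failure...
                last_error = exc
        raise last_error  # every site failed; surface the final error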

Reverse engineering can be used where there is not 100% visibility: test tools measure application performance from several locations, while onsite staff and remote support can analyze the end-to-end performance from the local I.T. equipment and bandwidth upstream as far as the hosted services.

So, what can hospitals and clinics do to address performance and reliability today?
1. Review Service Level Agreements with I.T. vendors. Look for performance metrics and uptime.
2. What is the staff’s experience with the vendor’s product and quality of support? Does the vendor take ownership and go beyond their remit to excel at managing the customer experience?
3. Does the vendor show end-to-end connectivity: do the support staff seem trained and empowered, and are the account managers able to speak confidently about the technology and how it’s managed?

These points will build up a picture of the quality of the service that will be provided.

Writer – Cormac Trant