Tuesday 21 March 2017

David Haselwood | Global Animal Vaccines Market to Witness Impressive Growth During 2016 – 2024

Zion Market Research, a market research group, has announced an analysis report titled “Animal Vaccines Market: Industry Perspective, Comprehensive Analysis and Forecast, 2015 – 2021.”

Vaccination is one of the most important interventions in disease prevention ever developed, and its effectiveness is seen in the reduction of disease. Animal vaccines provide protection against diseases such as rabies, feline panleukopenia, feline viral rhinotracheitis, feline calicivirus infection, canine distemper, canine parvovirus infection, and canine hepatitis. Hence, most veterinarians recommend vaccines depending on the type and severity of the disease.

Animal vaccines can be made available without the large, controlled trials that are mandatory prior to the release of human vaccines. Animal vaccines are regulated by the US Department of Agriculture, which only requires that a vaccine be shown to be safe and pure and have a “reasonable expectation” of efficacy prior to its release. Nevertheless, the clinical relevance or applicability of a particular vaccine may not necessarily be assured by the licensing process. This makes it easier for manufacturers to supply animal vaccines and meet the rising demand in the global market.
Based on the type of vaccine, the global animal vaccines market is segmented into inactivated vaccines, toxoid vaccines, live attenuated vaccines, subunit vaccines, and others. The others segment is further sub-segmented into conjugate vaccines, recombinant vaccines, and DNA vaccines; among these, the DNA vaccines segment is anticipated to see the highest growth in the near future. On the basis of disease category, the global market is segmented into anthrax vaccines, rabies vaccines, brucellosis vaccines, DA2PPC vaccines, clostridium vaccines, and others. Based on product type, the market is further segmented into companion animal, porcine, livestock, equine, poultry, aquaculture, and other animal vaccines. Owing to the rising number of pet owners worldwide, the companion animal segment is the largest and fastest-growing segment in the global market.
The growth of the global animal vaccines market is expected to be boosted by the increase in the livestock population and the frequent occurrence of livestock diseases. In addition, factors such as the rising prevalence of zoonotic diseases, various initiatives taken up by government agencies and animal associations, and the introduction of new vaccine types also impact the growth of the market positively. Moreover, technological innovations and growing awareness of animal health in emerging countries are further boosting market growth. Read More...

Monday 20 March 2017

David Haselwood | New Role to Develop International MedTech Partnerships


Media Release – University of Auckland – 20 March 2017

Developing more international partnerships and investor opportunities for medical technology is the focus of Dr Diana Siew’s new role as strategic partnership specialist for the University of Auckland’s bioengineering institute.
Her role with the Auckland Bioengineering Institute (ABI) will contribute to the growth of the MedTech Centre of Research Excellence (MedTech CoRE) and the Consortium for Medical Device Technologies (CMDT).
She has a strong innovation, research management and relationship management background in New Zealand’s medical technology sector.
Dr Siew will retain her role as co-chair of the CMDT that sits alongside the MedTech CoRE. She is also an Associate Director for the MedTech CoRE, responsible for strategic partnerships and seed funding.
Dr Siew is an alumna of the University of Auckland with a doctorate in Chemistry and many years’ experience in New Zealand’s medtech environment, including past roles with Industrial Research Ltd and Callaghan Innovation.
“My new focus will be working alongside the ABI to progress the MedTech CoRE and CMDT,” she says. “Five years ago, Professor Peter Hunter and I co-founded the CMDT to reduce the isolation of medical technology research institutions around the country.”
“Feedback from multi-nationals then was that they found it hard to work in New Zealand with its large number of different research organisations in the medical health technology space,” she says. “They sometimes didn’t know where to start to find all the people for a particular focus.”
“We developed the CMDT as a national network to highlight New Zealand’s medtech activity and connect companies, the research industry, health providers and government stakeholders,” she says. “It’s the NZ Inc front for medtech research in this country and makes it easier for multi-national companies to work here.”
The CMDT is led by a partnership of the University of Auckland with the universities of Canterbury, Otago, AUT, Victoria University of Wellington and Callaghan Innovation.
“It sits alongside the MedTech CoRE, which is the translational research pipeline of new technologies for the medtech sector,” says Dr Siew. “We now have a high level of trust in the network and transparency between the partners.”
Earlier this month, the CMDT partners hosted a workshop for a group of Japanese researchers, companies and funders to support a collaboration between the two countries, focussed on developing new technologies for elderly care.
Another of Dr Siew’s achievements while at Callaghan Innovation was founding the Standing Trial Population Centres that support fast early-stage validation studies of medical devices and digital health systems to accelerate technology development for both health and economic outcomes.
“This platform accelerates the ability of a medtech company to get quick validation for prototypes and concepts that they are working on,” she says. “This reduces the time and expense in identifying clinical expertise and recruiting patients.”
The platform gives multi-nationals easy access to the four main areas in which the Standing Trial Population Centres operate: technologies for elderly care, rehabilitation innovation, remote community care, and design and development of new devices.
Waikato District Health Board’s Institute of Healthy Ageing and AUT are key partners in two of the Standing Trial Population Centres.
Another initiative developed by Dr Siew for medtech in New Zealand is a showcase of the latest technologies available in the country. Read More...

Wednesday 15 March 2017

David Haselwood | Artificial Intelligence in Health Care Delivery: Where Might it Take Us, and What Happens if We Get There?

It is difficult to avoid the specter of “artificial intelligence” (AI) these days, and those working in the health care sector are no exception. Health care delivery has been shaped for many years by a variety of tools that use some form of AI, and recent advances in hardware and computer science have increased the likelihood of an even more significant impact. Today, some health diagnostic tools using advanced AI systems in research settings perform as well as their human counterparts, and sometimes better; in the future, performance will continue to improve, and the scope of activities subject to this kind of automation will likely increase.

Currently, advanced AI systems are being used in health care delivery settings in very discrete ways, designed to provide practitioners with more and better information with which to make decisions. The function of the practitioner remains unchanged and familiar: the practitioner is the final authority with respect to the provision of diagnostic and treatment services. The practitioner’s judgment remains paramount.

It is, therefore, easy to be lulled into a false sense of security regarding the application of legal and regulatory standards to what seems to be nothing more than another tool in a practitioner’s bag. There are, however, reasons to take a more critical view and to consider what the future may hold, because the expanded potential of AI systems in health care delivery is likely not as far off as we might think.

Whose Judgment?

Any professional is held to a standard of care that, at its most fundamental level, recognizes that the professional will exercise his or her judgment in the performance of the profession. That judgment is exercised in the context of past and ongoing learning, training, experience and the use of existing tools that assist the professional in exercising it. The professional takes the facts and circumstances and weighs various conclusions regarding a course of action. In this context, AI tools can be extremely beneficial, particularly in health care, as they can speed analysis, expand the provider’s knowledge base and accelerate the review of vast amounts of data.

For liability and licensure purposes, however, the practitioner must never lose sight of his or her own responsibility and must always exercise independent judgment. The practitioner must not delegate to the AI system the essential function of a licensed professional: making the final call. While this is a simple concept to express, it may become a harder one to implement as AI systems continue to improve and expand their clinical diagnostic and even treatment-planning capabilities.

Experience indicates that technology will continue to develop as a ubiquitous tool. We accept information technology into our professional and personal lives with ease. Studies indicate that younger generations adopt technology with ease and confidence, and demand that these technologies be made available to them in a variety of contexts. It appears that, unless there is a law against it (and even if there is), someone is going to build an app, and people will use it. The health care sector is not immune to this trend, even though its significant regulatory environment makes rapid and systemically valuable adoption difficult. This pressure for adoption will only increase as AI systems continue to develop, improve and demonstrate their effectiveness in the health care delivery setting.

Clearly, there is nothing wrong with relying on proven technology; but at what point do we, as a society, accept that proven technology can replace the judgment of a licensed professional? If an AI system proves to be more effective and reliable than a human physician at a certain function, then should we not rely on the AI system’s judgment?

Oversight and Standards of Care

Regulation is largely about allocating responsibility among actors, and ensuring that certain actors have the requisite skills, knowledge, assets, qualifications or other protections in place given the nature of what they are doing. We regulate health care practitioners, financial institutions, insurers, lawyers, automobile salesmen, private investigators and others because we believe, as a society, that these actors—human or corporate—in exercising their judgment should be held to heightened standards. Accordingly, not only are these actors subject to potentially more exacting standards of care, but also they frequently must be licensed, demonstrate a certain financial stability or otherwise prove a degree of trustworthiness.

Similarly, we hold products to a more exacting standard. In health care in particular, not only do we require medical devices to prove their efficacy and safety, but we also require that their manufacture adhere to certain quality standards. Further, certain products may be held to a standard of “strict liability” if they do not function properly. Accordingly, the developers or manufacturers of these products face significant liability for the failures of their products.

A health care practitioner’s standard of care is an evolving standard, and one that does not exclude the appropriate utilization of technology in the care setting—indeed, it may eventually require it if the technology establishes itself in common usage. It is possible to foresee advanced, judgment-rendering AI systems integrated into the care setting. The question we must ask is whether our existing legal and regulatory tools provide an appropriate and effective environment in which these tools are deployed.

Allocating Responsibility

At some point, under some circumstances, AI systems will start to look more like a practitioner than a device—they will be capable of, and we will expect them to, render judgments in a health care context. The AI system will analyze symptoms, data and medical histories and determine treatment plans. In such a circumstance, is it fair to burden the treating practitioner with the liability for the patient’s care that is determined by the AI system? Are existing “product liability” standards appropriate for such AI systems?

This latter question is relevant given the “black box” nature of advanced AI systems. The so-called AI “black box” refers to the difficulty or inability of accessing the workings of an AI system as we can with other software systems. The reason is the nature of some of these AI systems, which frequently utilize neural networks. In essence, neural networks are large layers of interconnected nodes. Each node applies a generally fairly simple function, but by inputting a great deal of information and “training” the network, these relatively unsophisticated interconnected layers of nodes can produce remarkably sophisticated results.
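
To make the “layers of simple functions” idea concrete, here is a minimal sketch in plain Python (the network shape, weights and inputs are purely hypothetical, not drawn from any real diagnostic system): each node does nothing more than compute a weighted sum of its inputs and pass it through a squashing function.

    import math
    import random

    def node(inputs, weights, bias):
        # One node: a weighted sum of inputs passed through a simple nonlinearity.
        total = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1.0 / (1.0 + math.exp(-total))  # sigmoid squashing function

    def layer(inputs, weight_rows, biases):
        # One layer of interconnected nodes; each node sees every input.
        return [node(inputs, w, b) for w, b in zip(weight_rows, biases)]

    random.seed(0)

    # A hypothetical, untrained network: 3 inputs -> 4 hidden nodes -> 1 output.
    hidden_w = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
    output_w = [[random.uniform(-1, 1) for _ in range(4)]]

    features = [0.8, 0.2, 0.5]  # e.g., three numeric measurements
    hidden = layer(features, hidden_w, [0.0] * 4)
    score = layer(hidden, output_w, [0.0])[0]
    print(round(score, 3))  # training would nudge the weights until the score is useful

Notably, printing hidden_w or output_w, before or after training, yields only grids of numbers; nothing in them “explains” the resulting score, which is precisely the black box problem discussed below.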

While these neural networks can produce excellent results, their “reasons” for reaching a conclusion are not easily discernible. In a sense, these new AI systems become functional and effective much as a human does. We learn, for example, what a dog is by seeing dogs and being told that these are dogs; the same is true of neural network AI systems. While we may later learn that dogs share common features and are categorized in certain ways, we recognize dogs on the basis of experience, and the same is true for AI systems. Unpacking why an AI system misidentifies a cat as a dog, accordingly, can be very difficult; in essence, it is an exercise in understanding why a neural network rendered a judgment.

In this context, it is fair to ask whether a judgment-rendering AI system in any sector should be held to the same standard as other products. We may want to consider any number of factors and actors when determining how to allocate responsibility. We may want to allocate responsibility among the practitioner, the developer of the AI system, the “trainer” of the AI system, the “maintainer” of the AI system and, perhaps, the AI system itself.

How to Regulate: Start Asking the Questions

Achieving a reasonable and effective approach to regulating new dynamics in health care delivery will require thinking carefully about how best to regulate AI systems and care delivery as this technology continues to advance and becomes capable of taking over certain “professional” functions. A number of factors must be taken into consideration.

1. The existing oversight regime: Right now, the US Food and Drug Administration (FDA) regulates the manufacture and distribution of medical devices, including software, intended to be used for the diagnosis, treatment, cure or prevention of disease or other conditions. FDA approval or clearance of a medical device does not necessarily limit physician utilization of a product for off-label purposes. State medical boards regulate the practice of medicine, ensuring that the physician community adheres to appropriate standards of care. Both of these regulatory bodies will need to review their approaches to regulation in order to effectively integrate AI systems into care delivery.

2. The ubiquity of AI in the IoT: While some AI systems may be discrete pieces of system machinery, it would be a mistake to ignore AI penetration into the “internet of things” (IoT) and the larger digital landscape, and the related penetration of the IoT into health care through patient engagement, data tracking, wellness and preventative care tools and communication tools. In short, we need to recognize that increasingly sophisticated forms of “health care” are taking place away from hospitals, doctors’ offices and other traditional care settings, and not under the direct supervision of health care professionals.

Directly related to this, of course, is the utilization and maintenance of data. AI health care tools integrated within the IoT will likely be privy to massive amounts of data—but under what approval and subject to what restrictions? Frequently, even the most isolated AI tools in health care rely on massive data sets. Accordingly, data privacy and security issues will increase in importance and consideration of how the existing privacy and security regulatory regime applies to advanced AI systems will necessitate a forward-thinking approach.

3. Liability and insurance: Given the black box nature of AI systems and the unique ways they are developed to perform their functions, determining liability becomes complicated. As noted above, different actors performing different functions come into play, and the role and function of the health care practitioner may begin to change given the nature of some of these AI systems. The practitioner may take on responsibility for AI system training or testing, for example. How liability should be allocated in such a complex environment is a difficult question, and one that will likely evolve over time.

Further, the standards for liability may need to be reconsidered, and the standards of care for the delivery of care may need to undergo radical transformation if AI systems prove able to function at a higher level of quality than their human counterparts. As the interaction between human physicians and AI systems evolves, the standard of care can easily become quite complex and confusing.

4. The robot doctor: Political and legal theorists are already seriously contemplating imbuing AI systems with legal attributes of personhood. While fully sentient and sapient robots may be far off in the future, legal rights and responsibilities do not require such features. For example, we provide corporate bodies and animals with legal rights and responsibilities. We also discriminate among different groups of “people” (e.g., between citizens and non-citizens; between pets and wild animals). In fact, the notion of rights and responsibilities for an AI system may assist in designing an appropriate regulatory environment.

Similarly, we may borrow from human-based liability standards to evaluate whether an event caused by an AI system is actionable. Given the manner in which neural networks are trained and their black box nature, a “reasonable robot” standard of care may become an effective way of determining whether a wrong has occurred.

The Future of Health Care Professionals

We have not yet reached the age of the machine, and our health care is still best served by rich engagement between patients and well-educated, well-trained and well-equipped health care professionals. Health care professionals have the opportunity to shape the way AI systems are best used in care delivery today, and the way future systems are best utilized as they continue to improve and evolve.