Consortium News

  • 21 May 2015 4:25 PM | Brian Kelley

    retrieved from Boston Business Journal  
    May 21, 2015  |  Jessica Bartlett

    Children’s has acquired its first-ever primary care physician group, and it’s not in Massachusetts.

    The pediatric institution is finalizing a deal to acquire Children’s & Women’s Physicians of Westchester (CWPW), a group of more than 276 physicians across 57 locations in New York, Connecticut and New Jersey.

    Executives wouldn’t disclose the terms of the deal, but said they expect it to be finalized by July.

    Dr. Kevin Churchwell, executive vice president of health affairs and chief operating officer at Children’s, details why Children’s sought connections out of state, and what the changes could mean long-term.

    How did this relationship come about?

    This has been over a year of discussion, to be honest with you. It started as a mutual conversation as to what the possibilities were or could be between the two groups, knowing we had a history of referrals from the New York area and had relationships with CWPW and the Westchester hospital system. The more we talked, the more we realized there was significant synergy in our beliefs, values and commitment to patients, and in recognizing how the landscape of healthcare would change. We saw there would be a great relationship in working together.

    How will this affiliation work?

    They will be part of our community of care as we continue to develop our northeastern pediatric network. They will keep their appointments with New York Medical College and their medical staff positions at Maria Fareri Children’s Hospital at Westchester Medical Center, and that won’t change. They will also be part of our continuum. There are intricacies of their board and decision-making that we’re working through, but they will be part of Children’s from that standpoint. They will have local administrative oversight there, with Children’s being the parent and helping support that. It’s new for us, so we will continue to work through these intricacies as we develop it over the next year or two.

    You mentioned you’ve already had referrals from New York, so why is this advantageous to Children’s?

    It’s more about patients and the family. We recognize that Children’s is a local, regional, national and international destination for patients with special problems. …They also recognize that there is a gap in the continuum of care when patients come to us and are sent back, and that this gap impacts the quality of care we need to provide…

    We’ll have constant communication, and the ability to have an impact on care quality at the local level, through resources but also through the protocols that we have and that they have…especially with those children with tertiary (and higher) care needs. That’s important and the future of medicine…expecting that those children will be at Children’s and will have an easy referral to us, and that we can refer back and make sure care remains local.

    Will this change CWPW physician rates?

    That’s a great question. Rates continue to change. We expect the rates won’t change overnight, but there will most likely be some adaptation with our involvement, impacted not just by the landscape but by different legislation and the needs of patients and payers. We will see how that’s going to evolve.

    How many affiliated physicians does Children’s currently have?

    Within Children’s we have our departments that are part of the Boston Children’s Hospital enterprise. That’s a lot of physicians, over 1,000 (approximately 1,300): specialists in our departments and in our primary care practice that is Children’s-based.

    We don’t own a primary care practice outside of Children’s. We have alliances within New England, and that is through the Pediatric Physicians' Organization at Children's Hospital Boston (a 300-physician group of doctors at more than 75 practices throughout Eastern Massachusetts) – but we don’t own those practices.

    We will purchase the CWPW practice.


  • 15 May 2015 4:21 PM | Brian Kelley

    Retrieved from Healthcare-informatics.com
    May 15, 2015 by Rajiv Leventhal

    Another bill regarding ICD-10 has been introduced in the U.S. House of Representatives. Rather than calling for the new coding set to be prohibited, as the most recent bill did, this one pushes for a required ICD-10 transition period following implementation on October 1.

    This bill, H.R. 2247, the Increasing Clarity for Doctors by Transitioning Effectively Now Act (ICD-TEN Act), would “require the Secretary of Health and Human Services (HHS) to provide for transparent testing to assess the transition under the Medicare fee-for-service claims processing system from the ICD-9 to the ICD-10 standard, and for other purposes,” according to a blog post by the Journal of AHIMA (the American Health Information Management Association).

    The bill, introduced on May 12 by Rep. Diane Black (R-TN), would not halt or delay the Oct. 1, 2015 implementation deadline for using ICD-10-CM/PCS, nor would it require the Centers for Medicare and Medicaid Services (CMS) to accept dual coding (claims coded in either ICD-9 or ICD-10). However, the bill would require HHS to conduct “comprehensive, end-to-end testing” to assess whether the Medicare fee-for-service claims processing system based on the ICD-10 standard is fully functioning. HHS would be required to make the end-to-end testing process available to all providers of services and suppliers participating in the Medicare fee-for-service program, according to AHIMA.

    Not later than 30 days after the completion of the end-to-end testing process, the HHS Secretary would be required to submit to Congress a certification of whether or not the Medicare fee-for-service claims processing system based on the ICD-10 standard is fully functioning.

    HHS would need to prove that it is processing and approving at least as many claims as it did in the previous year using ICD-9. If the transition is not deemed “functional” based on this benchmark, HHS would need to identify additional steps that it would take to ensure ICD-10 is fully operational in the near future, according to the bill.
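
    In code terms, the bill’s benchmark reduces to a simple comparison of approval volumes. Here is a minimal sketch (the claim counts are hypothetical, and the bill itself does not prescribe any implementation):

    ```python
    # Hypothetical illustration of the ICD-TEN Act's benchmark: Medicare must
    # process and approve at least as many claims under ICD-10 as it did in
    # the comparable prior-year period under ICD-9.

    def transition_is_functional(icd9_approved: int, icd10_approved: int) -> bool:
        """True if ICD-10 approvals meet or exceed the ICD-9 baseline."""
        return icd10_approved >= icd9_approved

    baseline_icd9 = 1_200_000   # hypothetical claims approved under ICD-9
    observed_icd10 = 1_150_000  # hypothetical claims approved after October 1

    if transition_is_functional(baseline_icd9, observed_icd10):
        print("Benchmark met: certify the system as fully functioning.")
    else:
        print("Benchmark missed: identify additional corrective steps.")
    ```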

    During an 18-month transition period and any ensuing extensions, no reimbursement claim submitted to Medicare could be denied due solely to the “use of an unspecified or inaccurate subcode,” according to the bill.

    “In the past, Congress has repeatedly delayed the switch from the ICD-9 coding system to the far more complex ICD-10 system out of concern about the effect on providers. Neither Congress nor the provider community support kicking the can down the road and supporting another delay, but we must ensure the transition does not unfairly cause burdens and risks to our providers, especially those serving Medicare patients,” Black wrote in a letter urging fellow legislators to cosponsor the ICD-TEN Act. “During the ICD-10 transitional period, it is essential for CMS to ensure a fully functioning payment system and institute safeguards that prevent physicians and hospitals from being unfairly penalized due to coding errors.”

    The most recent ICD-10 bill, H.R. 2126, introduced by Rep. Ted Poe (R-TX) on April 30, would “prohibit the Secretary of Health and Human Services from replacing ICD-9 with ICD-10 in implementing the HIPAA code set.” Soon after that bill was introduced, AHIMA predicted that it could face difficulty getting through the committee process and to the House floor for a vote.

    AHIMA opposes this bill as well, saying that the ICD-10 contingency plans already supported by CMS are in place and working well. H.R. 2247’s proposed 18-month grace period on coding, during which nearly all claims would be accepted, would “create an environment that’s ripe for fraud and abuse,” said Margarita Valdez, senior director of congressional relations at AHIMA.


  • 14 May 2015 10:55 AM | Brian Kelley

    Retrieved from New England Journal of Medicine
    May 14, 2015  |  Austin B. Frakt, Ph.D., and Nicholas Bagley, J.D.

    What if it were impossible to closely study a disease affecting 1 in 11 Americans over 11 years of age — a disease that's associated with more than 60,000 deaths in the United States each year, that tears families apart, and that costs society hundreds of billions of dollars?1 What if the affected population included vulnerable and underserved patients and those more likely than most Americans to have costly and deadly communicable diseases, including HIV–AIDS? What if we could not thoroughly evaluate policies designed to reduce costs or improve care for such patients?

    These questions are not rhetorical. In an unannounced break with long-standing practice, the Centers for Medicare and Medicaid Services (CMS) began in late 2013 to withhold from research data sets any Medicare or Medicaid claim with a substance-use–disorder diagnosis or related procedure code. This move, the result of privacy-protection concerns, affects about 4.5% of inpatient Medicare claims and about 8% of inpatient Medicaid claims from key research files (see table), impeding a wide range of research evaluating policies and practices intended to improve care for patients with substance-use disorders.

    The timing could not be worse. Just as states and federal agencies are implementing policies to address epidemic opioid abuse and coincident with the arrival of new and costly drugs for hepatitis C — a disease that disproportionately affects drug users — we are flying blind.

    The affected data sources include Medicare and Medicaid Research Identifiable Files, which contain beneficiary ZIP Codes, dates of birth and death, and in some cases Social Security numbers. For tasks common to most health services research — such as combining patient-level data across systems (e.g., Medicare, Medicaid, and the Veterans Health Administration [VHA]), associating them with community or market factors (e.g., provider density or type of health insurance plans available), or studying mortality as an outcome — these are essential variables.
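
    To make the role of those variables concrete, here is a minimal sketch of the kind of cross-system linkage the authors describe (the records and field names are invented, not the actual Research Identifiable File layout):

    ```python
    # Sketch of patient-level linkage across payer systems (hypothetical data).
    # Identifiers such as a beneficiary ID and ZIP Code are what make joins
    # across Medicare, VHA, and market-level data possible at all.

    medicare = [
        {"bene_id": "A1", "zip": "02118", "inpatient_days": 4},
        {"bene_id": "B2", "zip": "10601", "inpatient_days": 9},
    ]
    vha = [{"bene_id": "B2", "vha_visits": 12}]
    provider_density = {"02118": 8.1, "10601": 3.4}  # hypothetical market factor

    vha_by_id = {r["bene_id"]: r["vha_visits"] for r in vha}
    linked = [
        {**r,
         "vha_visits": vha_by_id.get(r["bene_id"], 0),     # cross-system join
         "provider_density": provider_density[r["zip"]]}   # community factor
        for r in medicare
    ]
    print(linked)
    ```

    Remove the beneficiary ID or the ZIP Code and both joins become impossible, which is why the authors call these essential variables.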

    For decades, CMS has released data on claims related to substance-use disorders to allow researchers to study health systems and medical practice. One early example of such work is a study based on 1991 Medicare claims data that showed that few elderly patients received follow-up outpatient mental health care after being discharged with a substance-use–disorder diagnosis. Patients who received prompt follow-up care were less likely to die, a finding that could not have been obtained without information on patients' precise date of death.2 More recently, a 2010 study used 2003–2004 Medicare claims data linked by Social Security number to records from the VHA to assess the extent to which patients with substance-use disorders relied on the VHA for care.3 Substance-use disorders are among the diagnoses that have been included in the Dartmouth Atlas analyses of geographic variation in Medicare spending — which rely on ZIP Code identifiers — going back to at least 1998. To our knowledge, no patients have been harmed because of data breaches associated with studies such as these.

    CMS has justified the data suppression by pointing to privacy regulations that prescribe the stringent conditions under which information related to the treatment of substance-use disorders may be shared.4 These regulations, which are overseen by the Substance Abuse and Mental Health Services Administration (SAMHSA), already frustrate accountable care organizations and health-information exchanges, since their elaborate consent requirements make it difficult or impossible to share patient data related to substance-use disorders. As a result, many organizations exclude such information from their systems, undercutting efforts to improve care and efficiency.

    For researchers, the problem is more acute. Although the privacy regulations authorize providers to disclose data on substance-use disorders for research purposes, they prohibit third-party payers — including CMS — from doing so. In 1976, when the regulations were first adopted, this prohibition was not a substantial impediment to research. Before computers came into widespread use, researchers could not look to insurers or CMS to provide large claims-based data sets. Even if they could, crunching those data would have been exceedingly difficult.

    But the world has changed. Access to reliable Medicare and Medicaid data has long offered researchers a window into U.S. health care.2,3 Indeed, given the unwillingness of private insurers to share their data, Medicare and Medicaid data often provide our only way of gathering information about medical practice, patient outcomes, and costs. The very importance of the data may explain why CMS has long overlooked the prohibition on disclosure.

    In 2013, however, SAMHSA advised CMS that the privacy regulations require suppression of claims related to substance-use disorders. The agency's sudden insistence on this point is puzzling. The law that the privacy regulations are intended to implement states that identifiable data on substance-use disorders “may be disclosed,” even without patient consent, “to qualified personnel for the purpose of conducting scientific research.” Banning CMS from sharing such data with researchers is difficult to square with that statutory exemption.

    Nonetheless, in November 2013, CMS began scrubbing Medicare data of claims related to substance-use disorders. It did the same for Medicaid data in early 2014. No notice was given to the research community about the policy change. Most of our colleagues have been shocked to learn of it; many others probably remain unaware of the change.

    The suppression has skewed Medicaid data more than Medicare data, a disparity that reflects differences between the populations served by the two programs (see table, and the Supplementary Appendix, available with the full text of this article at NEJM.org). In both programs, inpatient claims are much more likely to be affected than outpatient claims.

    In the vast majority of cases, claims are suppressed because the patients have secondary diagnoses of substance-use disorders. That raises an additional concern: many of the withheld data pertain to admissions for services that address not substance-use disorders but rather conditions that may be exacerbated by substance abuse. In other words, the data suppression extends well beyond its intended domain.

    The effects of the CMS actions are thus much broader than they might initially seem. Clearly, it is now infeasible to conduct any study of patients with substance-use disorders based on Research Identifiable Files. But studies of conditions disproportionately affecting such patients — such as hepatitis C or HIV — will also be hampered. Moreover, any study relying on those files cannot make full diagnosis-based risk adjustments that include substance-use–disorder diagnoses. And because the data have been altered in a systematic, nonrandom manner — with suppression affecting different populations, age groups, regions, and providers to different degrees — the results of many studies that have no apparent connection to substance use will be biased.
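
    The bias the authors describe is easy to demonstrate with a toy simulation (invented parameters, standard library only): when the suppressed claims differ systematically from the rest, statistics computed from the scrubbed file are skewed.

    ```python
    # Toy simulation of nonrandom suppression bias (invented parameters).
    # If claims carrying substance-use-disorder codes are costlier on average,
    # dropping them systematically understates mean spending.
    import random

    random.seed(0)
    claims = []
    for _ in range(10_000):
        has_sud_code = random.random() < 0.08   # roughly the inpatient Medicaid share cited
        cost = random.gauss(12_000 if has_sud_code else 8_000, 2_000)
        claims.append((has_sud_code, cost))

    true_mean = sum(c for _, c in claims) / len(claims)
    kept = [c for sud, c in claims if not sud]  # the "scrubbed" research file
    scrubbed_mean = sum(kept) / len(kept)

    print(f"true mean cost:     ${true_mean:,.0f}")
    print(f"scrubbed-file mean: ${scrubbed_mean:,.0f}")  # biased low
    ```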

    And to what end? Without question, protecting patient confidentiality is essential, especially when it comes to potentially stigmatizing diagnoses and treatments. But there is no evidence that researchers — who, under current rules, must adhere to strict data-protection protocols, backed by criminal penalties — cannot appropriately secure research data. And most Americans want their health data to be available for research.5 At the same time, data suppression and access limitations remove from scrutiny a great deal of taxpayer-financed care.

    We believe that the federal government's short-sighted policy will harm the very people it was meant to protect. We encourage SAMHSA and CMS, in dialogue with researchers and providers, to restore access to data that are necessary to improving care for patients with substance-use disorders.


  • 13 May 2015 3:19 PM | Brian Kelley

    Misunderstanding these important tools can put your company at risk – and cost you a lot of money

    Retrieved from CSOOnline.com  |  May 13, 2015

    You’ve just deployed an ecommerce site for your small business or developed the next hot iPhone MMORPG. Now what?

    Don’t get hacked!

    An often overlooked but very important process in the development of any Internet-facing service is testing it for vulnerabilities, knowing whether those vulnerabilities are actually exploitable in your particular environment and, lastly, knowing what risks those vulnerabilities pose to your firm or product launch. These three processes are known as a vulnerability assessment, a penetration test and a risk analysis. Knowing the difference is critical when hiring an outside firm to test the security of your infrastructure or a particular component of your network.

     Let’s examine the differences in depth and see how they complement each other.


    Vulnerability assessment

    Vulnerability assessments are most often confused with penetration tests, and the terms are often used interchangeably, but they are worlds apart.

    Vulnerability assessments are performed by using an off-the-shelf software package, such as Nessus or OpenVAS, to scan an IP address or range of IP addresses for known vulnerabilities. For example, the software has signatures for the Heartbleed bug or missing Apache web server patches and will alert if they are found. The software then produces a report that lists the vulnerabilities found and (depending on the software and options selected) gives an indication of each vulnerability’s severity and basic remediation steps.
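
    The report is, in effect, a list of structured findings. Here is a minimal sketch of what consuming one looks like (the findings are hypothetical; real Nessus and OpenVAS reports have their own formats):

    ```python
    # Hypothetical scanner findings: a known-vulnerability ID, a CVSS
    # severity score, and a basic remediation step, as a scanner might report.
    findings = [
        {"host": "10.0.0.5", "issue": "OpenSSL Heartbleed (CVE-2014-0160)",
         "cvss": 7.5, "fix": "Upgrade OpenSSL to 1.0.1g or later"},
        {"host": "10.0.0.7", "issue": "Outdated Apache web server",
         "cvss": 5.0, "fix": "Apply current vendor patches"},
    ]

    # Rank by severity so remediation effort goes to the worst issues first.
    for f in sorted(findings, key=lambda f: f["cvss"], reverse=True):
        print(f'{f["host"]}: {f["issue"]} (CVSS {f["cvss"]}) -> {f["fix"]}')
    ```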

    It’s important to keep in mind that these scanners use a list of known vulnerabilities, meaning they are already known to the security community, hackers and the software vendors. There are vulnerabilities that are unknown to the public at large and these scanners will not find them.

    Penetration test

    Many “professional penetration testers” will actually just run a vulnerability scan, package up the report with a nice, pretty bow and call it a day. Nope – this is only a first step in a penetration test. A good penetration tester takes the output of a network scan or a vulnerability assessment and turns it up to 11 – probing an open port to see what can actually be exploited.

    For example, let’s say a website is vulnerable to Heartbleed. Many websites still are. It’s one thing to run a scan and say “you are vulnerable to Heartbleed” and a completely different thing to exploit the bug and discover the depth of the problem and find out exactly what type of information could be revealed if it was exploited. This is the main difference – the website or service is actually being penetrated, just like a hacker would do.

    Similar to a vulnerability scan, the results are usually ranked by severity and exploitability with remediation steps provided.

    Penetration tests can be performed using automated tools, such as Metasploit, but veteran testers will write their own exploits from scratch.

    Risk analysis

    A risk analysis is often confused with the previous two terms, but it is also a very different animal. A risk analysis doesn’t require any scanning tools or applications; it’s a discipline that analyzes a specific vulnerability (such as a line item from a penetration test) and attempts to ascertain the risk (financial, reputational, business continuity, regulatory and others) to the company if the vulnerability were to be exploited.

    Many factors are considered when performing a risk analysis: asset, vulnerability, threat and impact to the company. An example of this would be an analyst trying to find the risk to the company of a server that is vulnerable to Heartbleed.

    The analyst would first look at the vulnerable server, where it is on the network infrastructure and the type of data it stores. A server sitting on an internal network without outside connectivity, storing no data but vulnerable to Heartbleed has a much different risk posture than a customer-facing web server that stores credit card data and is also vulnerable to Heartbleed. A vulnerability scan does not make these distinctions. Next, the analyst examines threats that are likely to exploit the vulnerability, such as organized crime or insiders, and builds a profile of capabilities, motivations and objectives. Last, the impact to the company is ascertained – specifically, what bad thing would happen to the firm if an organized crime ring exploited Heartbleed and acquired cardholder data?

    A risk analysis, when completed, will have a final risk rating with mitigating controls that can further reduce the risk. Business managers can then take the risk statement and mitigating controls and decide whether or not to implement them.
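
    In code form, that weighing of likelihood, impact and controls might look like the following (a simplified qualitative scoring sketch with invented 1-5 scales; real risk methodologies are considerably richer):

    ```python
    # Simplified qualitative risk scoring (invented 1-5 scales and weights).
    # Risk is a function of threat likelihood and business impact, reduced
    # by the strength of mitigating controls.

    def risk_score(likelihood: int, impact: int, control_strength: int) -> float:
        """All inputs on 1-5 scales; returns residual risk out of 25."""
        inherent = likelihood * impact
        return inherent * (1 - control_strength / 5)

    # The same Heartbleed vulnerability on two very different assets:
    internal_idle_server = risk_score(likelihood=2, impact=1, control_strength=3)
    cardholder_web_server = risk_score(likelihood=5, impact=5, control_strength=1)

    print(f"internal server, no data:   {internal_idle_server:.1f}")
    print(f"customer-facing, card data: {cardholder_web_server:.1f}")
    ```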

    The three different concepts explained here are not exclusive of each other, but rather complement each other. In many information security programs, vulnerability assessments are the first step – they are used to perform wide sweeps of a network to find missing patches or misconfigured software. From there, one can either perform a penetration test to see how exploitable the vulnerability is, or a risk analysis to ascertain the cost/benefit of fixing the vulnerability. Of course, you don’t need either to perform a risk analysis. Risk can be determined anywhere a threat and an asset are present. It can be a data center in a hurricane zone or confidential papers sitting in a wastebasket.

    It’s important to know the difference – each is significant in its own way, with vastly different purposes and outcomes. Make sure any company you hire to perform these services also knows the difference.


  • 12 May 2015 9:52 AM | Brian Kelley

    retrieved from GovHealthIT.com  
    May 11, 2015 | Mark Fulford, Partner, LBMC

    In March of 2014, the Office for Civil Rights (OCR) announced that HIPAA audits would start in the fall of 2014. To date, no audits have taken place, and as of this writing, the audit program is still on hold. That said, the OCR is gearing up for the pre-selection process and has announced that audits will commence when the audit portals and project management software are completed.

    Like the start date, the exact number and types (desk vs. on-site) of audits have been in a state of flux. All indicators, however, point to significantly more than the 115 that were selected as part of the pilot audit program of 2011/2012. Participants will include health plans, healthcare providers and clearinghouses (covered entities), and in a second round, a cross section of business associates.

    For some healthcare organizations, submitting to an OCR audit will be challenging at best. The HIPAA audit pilot program revealed an egregious lack of attention to HIPAA rules and regulations across the industry. As a result, 2015 OCR audit participants can expect a particular focus on the areas with the most significant observations and findings in 2012: risk assessments; media movement and disposal; and audit controls and monitoring.

    But even if an entity has been reasonably attentive to compliance, it still behooves it to do some upfront research on what to expect should it be selected.

    How to respond

    The OCR has not been particularly forthcoming with information on the upcoming audits, so it’s up to individual organizations to interpret what to expect and how to prepare. But the OCR has indicated that — unlike the 2012 pilot program — the audits will be conducted by OCR personnel rather than by a third party. And unlike last time, the audits will lean more heavily toward desk audits, with onsite audits occurring on a case-by-case basis.

    According to information in presentations from Department of Health and Human Services personnel, here is what audited entities need to be aware of:

    1. The data request will specify content and file organization, file names and any other document submission requirements.
    2. Only requested data submitted on time will be assessed.
    3. All documentation must be current as of the date of the request.
    4. Auditors will not have the opportunity to contact the entity for clarification or to ask for additional information, so it is critical that the documents accurately reflect the program.
    5. Submitting extraneous information may make it harder for the auditor to find and assess the required items.
    6. Failure to submit a response to requests may lead to referral for a regional compliance review.

    Document submissions will be no small task, so gathering necessary evidence up front will minimize disruption to day-to-day operations.
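
    A simple internal tracker can take some of the scramble out of that gathering. Here is a sketch (the document names and dates are hypothetical; the actual OCR request will dictate its own content and file-naming requirements):

    ```python
    # Sketch of a pre-audit evidence check (hypothetical document names).
    # Flags documents that are missing, or not current as of the request date.
    from datetime import date

    required = ["risk_analysis", "risk_management_plan", "baa_inventory",
                "breach_response_policy", "security_training_log"]

    # Documents on hand, with the date each was last updated.
    on_hand = {
        "risk_analysis": date(2015, 1, 15),
        "baa_inventory": date(2013, 6, 2),
        "security_training_log": date(2015, 4, 30),
    }

    request_date = date(2015, 5, 12)
    stale_after_days = 365  # internal freshness policy, not an OCR rule

    for doc in required:
        updated = on_hand.get(doc)
        if updated is None:
            print(f"MISSING: {doc}")
        elif (request_date - updated).days > stale_after_days:
            print(f"STALE:   {doc} (last updated {updated})")
    ```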

    Getting ahead of OCR

    Once an organization receives notification, it should immediately mobilize. If subsequently chosen for an audit, participants will have only a short time to respond. The following are basic steps for a strategic OCR audit plan:

    Gather a team. Privacy and security officials should be assigned to a task force responsible for handling audit requests. It’s also a good idea to notify internal or external legal counsel to keep them on stand-by should guidance be necessary.

    Follow guidelines on how to respond. The OCR will provide specific instructions on how and when to respond. The OCR will not look favorably on a delayed response, and if unrequested documentation is submitted, it can be used in all observations and findings.

    Here are some of the areas the OCR audits will cover:

    1. Risk analysis
    2. Evidence of a risk management plan (e.g., a list of known risks and how they are being addressed)
    3. Policies and procedures, and descriptions of how they were implemented
    4. Inventories of business associates and the relevant contracts and BAAs
    5. An accounting of where electronic protected health information (ePHI) is stored (internally, printouts, mobile devices and media, third parties)
    6. How mobile devices and mobile media (thumb drives, CDs, backup tapes) are secured and tracked
    7. Documentation of breach reporting policies and incident response policies and procedures
    8. A record of security training that has taken place
    9. Evidence of encryption capabilities

    Question findings if they appear to be inaccurate. Historically, the OCR has allowed organizations to respond to observations and findings. Organizations that have documented all compliance decisions will fare better when trying to defend their position. There are many areas where HIPAA lacks specific direction; the ability to demonstrate a thoughtful and reasonable approach (in writing) will tend to be viewed favorably. 

    By preparing up front and responding in a timely fashion, most organizations should find that an OCR audit progresses fairly smoothly. For organizations that have instituted a reasonably compliant security program, there may be little or no follow-up.

    If there are a significant number of observations and findings, an organization may be subject to voluntary compliance activities, or a more in-depth compliance review. Should an in-depth review uncover significant issues, additional corrective action must be taken and/or fines may be imposed.


  • 11 May 2015 11:51 AM | Brian Kelley

    Here is the latest NORSE Attack Map - May 11, 2015.

    Norse is dedicated to delivering live, accurate and unique attack intelligence that helps our customers block attacks, uncover hidden breaches and track threats emerging around the globe. Norse offerings leverage a continuously updated torrent of telemetry from the world’s largest network of dedicated threat intelligence sensors. Norse is focused on dramatically improving the performance, catch-rate and return-on-investment for enterprise security infrastructures.

  • 11 May 2015 11:21 AM | Brian Kelley

    US-CERT Alert published May 7, 2015

    Systems Affected

    Systems running unpatched software from Adobe, Microsoft, Oracle, or OpenSSL. 

    Overview

    Cyber threat actors continue to exploit unpatched software to conduct attacks against critical infrastructure organizations. As many as 85 percent of targeted attacks are preventable [1].

    This Alert provides information on the 30 most commonly exploited vulnerabilities used in these attacks, along with prevention and mitigation recommendations.

    It is based on analysis completed by the Canadian Cyber Incident Response Centre (CCIRC) and was developed in collaboration with our partners from Canada, New Zealand, the United Kingdom, and the Australian Cyber Security Centre.

    Description

    Unpatched vulnerabilities give malicious actors entry points into a network. A set of vulnerabilities is consistently targeted in observed attacks.

    Impact

    A successful network intrusion can have severe impacts, particularly if the compromise becomes public and sensitive information is exposed. Possible impacts include:

    • Temporary or permanent loss of sensitive or proprietary information,
    • Disruption to regular operations,
    • Financial losses relating to restoring systems and files, and
    • Potential harm to an organization’s reputation.

    Solution

    Maintain up-to-date software

    The attack vectors frequently used by malicious actors, such as email attachments, compromised “watering hole” websites, and other tools, often rely on unpatched vulnerabilities in widely used software applications. Patching is the process of repairing those vulnerabilities.

    It is necessary for all organizations to establish a strong, ongoing patch management process to ensure the proper preventive measures are taken against potential threats. The longer a system remains unpatched, the longer it is vulnerable to compromise. Once a patch has been publicly released, the underlying vulnerability can be reverse engineered by malicious actors to create an exploit, a process documented to take anywhere from 24 hours to four days. Timely patching is one of the lowest-cost yet most effective steps an organization can take to minimize its exposure to the threats facing its network.
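
    At its simplest, the check worth automating is “is everything we run at least the patched version?” Here is a sketch (the inventory and minimum-version baselines are hypothetical; real baselines come from vendor advisories):

    ```python
    # Sketch of an automated patch-level check (hypothetical inventory and
    # minimum-version baselines; real baselines come from vendor advisories).

    def parse(version: str) -> tuple:
        """Turn '11.0.10' into (11, 0, 10) so versions compare numerically."""
        return tuple(int(part) for part in version.split("."))

    minimum_patched = {"adobe-reader": "11.0.10", "java-runtime": "8.0.45"}
    installed = {"adobe-reader": "11.0.2", "java-runtime": "8.0.45"}

    for package, have in installed.items():
        need = minimum_patched[package]
        if parse(have) < parse(need):
            print(f"UNPATCHED: {package} {have} (need >= {need})")
        else:
            print(f"ok:        {package} {have}")
    ```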

    Patch commonly exploited vulnerabilities

    Executives should ensure their organization’s information security professionals have patched the following software vulnerabilities. Please see patching information for version specifics.

    See the full US-CERT alert for details on and patches for the 30 targeted vulnerabilities.


  • 11 May 2015 9:54 AM | Brian Kelley

    by Pete Herzog
    Retrieved April 28, 2015

    An attack takes down the web server. An office worker notices there’s no response and calls IT support. So a member of IT support goes to the server room.

    He sees the power is on and all the network cables look okay. He goes to the keyboard to log in and sees there’s no shell. Nothing. Where’s the operating system?

    He thinks they got hacked. So he freaks out and calls the CISO, “The web server is dead. What do I do?”

    The CISO answers, “Don’t panic, I can help you. First, let’s make sure it’s dead.”

    There is a silence. Then a loud smash is heard. Back on the phone, the IT support person says “OK, now what?”

    * * *

    Tell me your cybersecurity strategy. If you have a head for business you probably just said a few words to yourself. It was short. It was concise. It was more information than sentence. You know your cybersecurity strategy by heart.

    But if you’re a cybersecurity consultant then you’re probably still mumbling your pitch. The thing is that unless you’re in the business of selling cybersecurity products and services, you really only have one cybersecurity strategy: don’t lose money. And it’s an integral part of any modern business plan.

    So what exactly is a cybersecurity strategy? A strategy is a plan with a set of goals and objectives to get a specific result. A cybersecurity strategy is a cybersecurity plan with a set of cybersecurity goals and cybersecurity objectives to get cybersecurity as a result.

    People who are into selling cybersecurity strategies like to say it also includes specifics on tools and metrics. But that’s really just a trick of adding tactics to the strategy so it doesn’t sound so useless.

    Yes, useless. Fun fact for you. A cybersecurity strategy is useless. There you go. A free tidbit for you. Enjoy. If you’re on Jeopardy someday and the category is business and the answer is “useless,” then you’ll be a big winner. You’ll thank me.

    Yes, useless…

    A CEO gets lost deep in the mountains after dark. He whips out his trusty sat phone and calls the office to look up his location on a map. A cybersecurity consultant happens to pick up.

    The CEO explains his situation and tells him that he needs the fastest way out of the woods.

    The consultant is heard tapping furiously at the keyboard, mumbling to himself as he thinks out loud, and after some time gets back on the phone, “You need to just fly out.”

    The CEO shouts, “How the hell do you expect me to grow wings and fly out?!”

    The consultant answers, “How should I know? I’m a strategist.”

    * * *

    The truth is that if you don’t have a cybersecurity strategy for your business, it’s because you’ve inherently got one. You’ve never bothered to formally make one because it’s so obvious. Like how you don’t have a formal not-dying strategy.

    Your cybersecurity strategy would likely say you don’t want threats of any sort affecting your assets of any sort now or in the future. Obvious.

    It’s such a no-brainer that if time travel were invented next week and criminals could go back in time to rip you off, your cybersecurity strategy would still be obvious enough to include that you don’t want to lose assets yesterday, too.

    And you didn’t have to even write it down. Or pay a cybersecurity consultancy a Monopoly-style wheelbarrow full of money to do so. So if it’s useless, why is there such a focus on a cybersecurity strategy? Because tactics are hard.

    Too harsh? No, appropriately harsh. It’s easier (and safer) to make a cybersecurity strategy sound like something important despite meaning nothing than it is to make tactics that work.

    You look better longer, too, because a cybersecurity strategy can go on meaning nothing for a really long time, but tactics that mean nothing get noticed right away. And I mean that in a bad way, not a Hollywood-starlet way.

    I know it’s no surprise to you, but cybersecurity is hard. Not only do we not know all of the possible threats, but even if we did, we still couldn’t know all of the shapes those threats could change into.

    Like if getting wet is a threat then what form will it take? Will it be snow, encroaching glacier, broken pipe, condensation, mis-forecasted hurricane, or the tears of a CISSP trying to create cybersecurity tactics?

    But knowing about threats and what to do about them is not needed or important in a cybersecurity strategy.

    No, a cybersecurity strategy, for real, looks like this. And this one is really, truly for real, and swear-to-holy-stuff looks like this. I copied it just like this from an official cybersecurity strategy and then lightly anonymized and generalized it:

    OUR CYBERSECURITY STRATEGY

    1. Securing Company systems – Our clients trust our company with their personal and business information, and also trust us to deliver services to them. They also trust that we will act to protect and advance our business interests. We will put in place the necessary structures, tools and personnel to meet our obligations for cybersecurity.
    2. Partnering to secure vital cyber systems outside the company – Our economic prosperity and our cybersecurity depends on the smooth functioning of systems outside the company. In cooperation with partners and clients we will support initiatives and take steps to strengthen our cyber resiliency, including that of our critical infrastructure.
    3. Helping our users to be secure online – We will assist our employees and clients in getting the information they need to protect themselves and their families online, and strengthen the ability of law enforcement agencies to combat cybercrime.

    The Strategy:

    • Reflects our values such as the rule of law, accountability and privacy
    • Allows continual improvements to be made to meet emerging threats
    • Integrates activity across the whole company
    • Emphasizes partnerships with government, business and academe
    • Builds upon our close working relationships with our allies

    Now, was there a single thing in there that REALLY needed to be written down? How many meetings did it take to write that? How much consultant blood money?

    What’s in there?

    • You will use cybersecurity to not lose assets
    • You will use partners with cybersecurity to not lose assets
    • You will help others use cybersecurity with your stuff to not lose assets

    Check. Check. And Check! Got it! The message is don’t lose assets, stated here just in case you missed it or wanna pay someone to tell you that. And do YOU have that? And I’m saying it’s OKAY that you don’t. Because there’s nothing in there that should be a surprise to you. It’s all obvious.

    Super obvious, wearing-a-cape obvious. And not just obvious but actually illegal not to consider doing things like following the “rule of law”.

    Not to mention the bit about values. Seriously, when’s the last time you thought, “Hey, I’m gonna undertake this task here and I’m not going to do it according to my values. Nope.” Assuming you know what your values are.

    Truthfully, I don’t think I can articulate my own values, but I’m pretty sure it would take serious, conscious effort to do something that was not in my values. Then again, expressing in writing that I will follow my values has no value to people who don’t know what my values are or can’t even articulate their own.

    But it’s a plan. Right? We need plans. And a cybersecurity strategy is a plan. Without which we can’t be a cohesive team making solid cybersecurity, right? Right?

    Wrong. You don’t need fluff telling you that your partners and clients and their families need you to have your act together and not lose their assets or them as an asset or their money which is clearly an asset. You know that. And you probably already have that in your business strategy under the heading Don’t Lose Assets.

    But to have a cohesive team making solid cybersecurity you do actually need to outline what you do. Yes, you do.

    And luckily for you, in cybersecurity, that do is to prevent losing assets. And everyone who wants to be in cybersecurity of any kind already knows this and cares about it; no one thinks their job is anything other than not losing assets.

    Those cybersecurity professionals aren’t freaking out about the cybersecurity strategy. And telling them is just so not helpful it’s offensive. You see, a cybersecurity strategy is about as effective as someone telling you to calm down and relax when you’re having an argument.

    No, you don’t need strategy. What you need are tactics. And you need to hire the people who know cybersecurity tactics.

    Cybersecurity tactics are where the rubber meets the road. They are the match striking the slate. They are literally the packets smacking the server. They are the way you do the things you have to do to have cybersecurity. And that’s hard.

    But you don’t need a cybersecurity strategy because you’ve already got one.

    * * *

    All uses of “cyber” in this column are for keyword use only and by no means does the author imply that using such language is appropriate or cool. Furthermore this author does not condone nor deny the use of the word cyber in any way because the author is okay with the word in general, despite its original definition, because language is a living thing and meanings change.


  • 06 May 2015 4:37 PM | Brian Kelley

    Retrieved from Boston Business Journal  |  May 6, 2015

    Brigham and Women’s Hospital has formed a partnership with a San Francisco-based seed-stage investment fund in an effort to test and potentially integrate digital health startup innovations into the Boston hospital.

    The hospital formed an affiliated medical partnership with Rock Health, and the two organizations are currently in the midst of finalizing plans.

    The partnership is expected to begin this summer and last three years.

    Lesley Solomon, executive director of the Brigham Innovation Hub at Brigham and Women’s Hospital, said the idea is to validate the innovations being funded by Rock Health.

    “We will have the opportunity to collaborate with thought leaders in the digital space developing tech that (has) the potential to dramatically transform health care delivery,” Solomon said. “(We’re trying to figure out) how can we get access to good digital technology that can help us impact patient care.”

    The Innovation Hub helps support internal startups and hosts innovation competitions. Solomon said executives were hopeful that Rock Health, a seed-stage venture fund focused on digital health startups, would also look at investing in Brigham technology, though that wasn't the intended purpose of the relationship.

    “I’m excited,” Solomon said. “For me, Rock Health is a thought leader in the digital space. They have demonstrated that they are committed to helping Brigham entrepreneurs tackle the biggest problems in health care.”

    The startups will be focused on digital health, including devices that connect to the cloud, apps and software platforms, and telemedicine.

    The Rock Health partnership also offers Brigham new access to California startups.

    "It doesn’t limit us from partnering with others, but for us we’ll have the opportunity to talk to the best, work with the best," Solomon said. "And they are based in San Francisco, where we don’t have a presence, so it helps us get access to startups we might not know about here."

    Venture capital firm Bessemer Venture Partners, which has an office in Boston, was a lead investor in Rock Health.



  • 30 Apr 2015 11:37 AM | Brian Kelley

    Retrieved from clinical-innovation.com  |  Apr 29, 2015  |  Beth Walsh

    Former National Coordinator for Health IT David Blumenthal, MD, penned a blog in the Wall Street Journal’s “The Experts” addressing the potential for health IT as well as challenges related to interoperability and outdated privacy and security regulations.

    Now president of The Commonwealth Fund, Blumenthal wrote about various scenarios in which health IT tools and mobile applications could help people track and monitor their healthcare by providing interactive, real-time information. But those advancements can’t happen unless electronic devices can communicate with each other.

    Many EHRs, mobile devices and personal sensors can’t exchange information at this point for a variety of reasons, but most importantly because “healthcare organizations are fearful of sharing patients’ data since it will liberate their customers to go elsewhere for their care.” And EHR vendors are “charging prohibitive fees and creating other barriers to information sharing” to make it more difficult for customers to “switch out one [EHR] for another,” he wrote.

    Blumenthal also wrote that the current privacy and security regulations were conceived and implemented before the internet existed and therefore “don’t offer adequate protections for the 21st century.” He added, “If people can’t trust the privacy and security of cloud-based health records, they won’t feel comfortable using them.”

    The obstacles, he concluded, are “mostly human in the making” and “can be solved by humans if the will exists. If we find a way, the healthcare future will be far brighter for all of us.”