We reviewed more than 50 risks in detail and devised a process to prioritize them.
After much discussion, we decided to assign three workgroups the task of intensively reviewing the risks. There were fifty-five items in the list, which we clustered into eight categories:
1 – Access management (7 items)
2 – Policy and other (7 items)
3 – Content management (9 items)
4 – Monitoring and containment (9 items)
5 – Desktop (8 items)
6 – Mobile computing (4 items)
7 – Data network (6 items)
8 – Facilities (5 items)
The items related to Facilities are part of our multi-year disaster recovery project. They involve mechanical, electrical, and environmental monitoring. Although disaster recovery is a component of HIPAA’s security rule, we decided to exclude these from our scoring activity. Those projects are already underway and are largely managed by dedicated IS teams.
To distribute the remaining fifty items among the three workgroups, we grouped the categories as follows:
Team 1 — Access management and policy/other (14 items)
Team 2 — Content management and monitoring/containment (18 items)
Team 3 — Desktop, mobile computing and data network (18 items)
We assigned five members to each team, including appropriate representatives from IS and compliance. We also designated a team leader for each.
We created a scoring spreadsheet and gave each team the following instructions.
“For items 1 to 5, please use a 1 to 5 Likert scale for your ratings. The lower the rating, the less the work, impact, and risk; the higher the rating, the more.
1. Rate the workforce impact, or “disruption factor,” from 1 to 5; minimal to significant. Do this for both the initial (first 6 months) and ongoing impact.
2. Probability that the vulnerability we are trying to protect against will occur. Rate from 1 to 5; unlikely to very likely.
3. Impact if the vulnerability does manifest itself. Rate from 1 to 5; minimal to significant.
4. Overall Compliance effort required. Rate from 1 to 5; minimal to significant.
5. Overall Information Systems effort required. Rate from 1 to 5; minimal to significant.
6. One-time capital estimate. Consider application software, professional services, training, hardware, database software, and other items normally charged to one-time capital for projects such as these.
7. One-time internal labor. Estimate in FTEs. For example, a project requiring 520 hours of internal labor would be 520/2080, or 0.25 FTE. Consider the full range of activities normally undertaken to bring a system into production.
8. Recurring internal labor. Post-go live support also expressed in FTE.
9. Recurring maintenance and purchased services. Annual cost.
10. Recurring – other. Any remaining recurring support cost not included above. Annual cost.
11. Overall priority – 1 to 14 for Team 1 and 1 to 18 for Teams 2 and 3.
In addition to filling in the spreadsheet, please document whatever other factors you considered or would recommend with regard to the risk item. For example, you may suggest that an item be broken up into two or more projects to address the most important elements (80/20 rule) and pare the costs.”
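The arithmetic behind the spreadsheet is straightforward and can be sketched in a few lines of Python. The item names, field names, and the simple unweighted sum used for ranking below are hypothetical illustrations, not part of our actual spreadsheet or scoring rules:

```python
# Hypothetical sketch of the team scoring arithmetic.
# Field names, sample items, and the unweighted sum are illustrative only.

HOURS_PER_FTE = 2080  # standard work year used in the instructions

def hours_to_fte(hours):
    """Convert internal labor hours to FTEs, e.g. 520 hours -> 0.25 FTE."""
    return hours / HOURS_PER_FTE

def composite_score(item):
    """Sum the five 1-to-5 Likert ratings; lower means less work, impact, risk."""
    return (item["disruption"] + item["probability"] + item["impact"]
            + item["compliance_effort"] + item["is_effort"])

# Two made-up risk items for illustration
items = [
    {"name": "Example risk A", "disruption": 4, "probability": 3,
     "impact": 3, "compliance_effort": 2, "is_effort": 4},
    {"name": "Example risk B", "disruption": 1, "probability": 4,
     "impact": 4, "compliance_effort": 3, "is_effort": 2},
]

# Rank items by composite score, highest (most work/impact/risk) first
ranked = sorted(items, key=composite_score, reverse=True)
for rank, item in enumerate(ranked, start=1):
    print(rank, item["name"], composite_score(item))

print(hours_to_fte(520))  # prints 0.25
```

In practice the teams weigh these ratings judgmentally rather than by formula, which is why item 11 asks for an overall priority ranking rather than a computed score.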
We asked the team leaders to submit their completed spreadsheets by June 22 so that everyone would have a chance to review their work before our next planning meeting on June 27.
On June 26th, the team leaders will meet with me to consolidate all their recommendations into a single list.
On Wednesday, June 27th from 2-4pm, we will meet with all the stakeholders to present a summary of team deliverables, complete a consolidated ranking of all risk items, set a tentative timeline for each item by fiscal year, and identify a sponsor or lead for each item.
The end result will be a multidisciplinary compliance priority list and work plan for the next two years.
I’ll let you know whether this formal process works to bring order to a large body of work. At this point, I’m optimistic.