Update: Deep6 AI’s spokesperson clarifies that the unprotected database in question was a test environment that contained dummy data from MIT’s Medical Information Mart of Intensive Care (MIMIC) system:


“Despite recent claims, no personal or patient health data was accessed, leaked, or at risk from a Deep 6 AI proof-of-concept database.

In August, a security researcher accessed a test environment that contained dummy data from MIT’s Medical Information Mart of Intensive Care (MIMIC) system, an industry-standard source for de-identified health-related test data. To confirm, no real patient data or records were included in this ephemeral test environment, and it was completely isolated from our production systems.

Based on current reporting, we have confirmed that the recent claims reference MIMIC data, and there was no access to real patient records. When the researcher notified us in August, we immediately secured the test environment to ensure there was no further concern.

Data security and privacy is a top priority at Deep 6 AI, and the responsibility to protect data is at the core of our business and top-of-mind for all our people.” – Deep6 AI Spokesperson.

Security researcher Jeremiah Fowler, working with the Website Planet research team, discovered a non-password-protected database totaling 68.53 GB and containing 886,521,320 records of medical-related data.

Further research by the team turned up multiple references to Deep6.AI, including internal emails and usernames. They immediately sent a responsible disclosure notice, and public access was restricted shortly after. The records appear to contain data on individuals based in the United States.


“The exposed records revealed Physician Notes that provided intimate details of patient illness, treatment, medication, family, social and even emotional issues. These were very complete descriptions and it was surprising just how many small details were included in these notes. It is a rare look behind the scenes of how these notes look and the kind of information that is collected by medical workers,” Jeremiah Fowler wrote in his blog.


“As a security researcher, I know very well how difficult it can be to search through massive amounts of data and identify what is sensitive and what is not. The basic idea of AI is for a machine to use large amounts of data to learn, become smarter, and predict accurate results from that data in a short amount of time. The same concept that makes artificial intelligence a functional solution is also what puts AI at risk from a cybersecurity standpoint.” – Jeremiah Fowler

Izaan Zubair
Izaan's inquisitiveness about technology drove him to launch his website Tech Lapse. He usually writes pieces on emerging technology, anime, programming and similar niches. He can be reached at [email protected]