Industry Experts Discuss Big Data Challenges: Security, Privacy, Quality – AI Business


Experts from leading companies in energy, health care and telecoms outlined the challenges and opportunities of leveraging big data at the AI Summit London Wednesday.

Panelists said they were using data to personalize customer experiences and make informed decisions, but were facing issues around security and privacy when collecting and storing that data.

Elena González Garcia, a lead operations and maintenance engineer at Scottish Power, said her company struggles with integrating data obtained from third parties like contractors.

“We may be receiving information from my contractors, but there’s nothing I can do to improve the quality of this information or the accessibility to this data simply because I am constrained by a contract,” Garcia said.

She said businesses should consider agreeing to better data-driven contracts with third parties.  

“I am the data owner of things that are like activities that are being performed on my assets, so I should be able to access this data in a very friendly and easy way and not fighting with contractors,” Garcia said. 

Speakers emphasized the importance of maintaining a data platform for experimentation and empowering teams to interrogate data in a safe environment.

The panel said businesses should consider exploring sandboxes and separate environments for experimenting with data before moving to production.

Related: Generative AI in the Spotlight at AI Summit London

Jamaria Kong, managing director of TowerBrook Capital Partners, whose portfolio includes TXO, AA and Maxor, said each of its companies has separate environments for testing new capabilities.

Kamal Jain, a principal data engineering manager at BT, said his company also maintains testing sandboxes, which give staff a platform to test and experiment with real, relevant data.

Jain explained that data entering BT's sandbox environment is rigorously checked for quality by the company's in-house tools to ensure it is suitable and safe for reuse by multiple teams.

“Ensuring that it’s a safe environment for the teams to experiment [in], but also [maintaining] a single source of truth into the production environment, which helps build the different AI use cases,” he said.

Also speaking on the panel was Pavithra Rajendran, a senior data scientist at the Great Ormond Street Hospital for Children. She explained that cybersecurity is a top priority when handling and managing patient data.

“It is very sensitive data, the cyber [considerations] are much more complex compared to other trusts,” Rajendran said. “We did a proof of concept with a cloud provider and what we found [was] that our security was too much to be able to start exploring cloud options.”


Another issue the panel discussed was data quality and having good enough data to test and deploy new solutions.

The panelists advised attendees to consider training staff to understand data quality, to improve the data a business has at its disposal.

“I want to emphasize the importance of people,” Garcia said. “Sometimes it’s not about having a super fancy algorithm, [instead] implementing guidelines that can [help] make sure the people that interact and create these things are part of the loop.”

Beyond training, Kong said her firm was exploring the use of generative AI to clean up data quality. Jain, meanwhile, said BT has tasked hyperscaler partners with performing data quality checks.
