
With the advent of AI and big data technologies, companies are relying more than ever on computer vision for trustworthy insights that help them make smart business decisions: maintaining compliance, creating more personalized customer experiences, and improving staff efficiency. 

There’s no doubt that computer vision is transforming how companies function and engage. Yet, as it embeds itself firmly into the IT mainstream, concerns are growing over its potential misuse.

Building ethical AI models for computer vision

Companies that use computer vision have a responsibility to consider how the AI models that drive it impact all stakeholders, such as customers, suppliers, employees, and society as a whole. 

When building AI models for computer vision, consider questions such as:

  • What data may be collected or processed?
  • Who can view the data?
  • How can we create algorithms that don’t make unethical or biased decisions?

Training with synthetic datasets

One way to mitigate ethical concerns is to use synthetic data creation processes to train computer vision machine learning (ML) models. 

Synthetic data is created manually or artificially, apart from data generated by real-world events, and can be fully anonymized. Think Sims-like 3D environments. This allows developers to produce the millions of anonymized images needed for ML training at relatively low cost, saving organizations from the costly and error-prone process of stripping personal information from collected data. 
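To make the idea concrete, here is a minimal sketch of the synthetic-data approach. The function name, grid size, and the rectangle standing in for a person are all hypothetical; real pipelines render photorealistic 3D scenes, but the principle is the same: every "image" and its label are generated from scratch, so no PII ever enters the training set.

```python
import random

def make_synthetic_scene(width=64, height=64, seed=None):
    """Generate one synthetic 'image' (a 2-D grid) containing a single
    rectangular stand-in for a person, plus its bounding-box label.
    Nothing here originates from real-world footage, so there is
    no personal information to strip out afterwards."""
    rng = random.Random(seed)
    # Blank background grid (0 = empty, 1 = object pixel).
    grid = [[0] * width for _ in range(height)]
    # Random object size and position, like randomizing a 3D scene.
    w, h = rng.randint(8, 16), rng.randint(16, 32)
    x0 = rng.randint(0, width - w)
    y0 = rng.randint(0, height - h)
    for y in range(y0, y0 + h):
        for x in range(x0, x0 + w):
            grid[y][x] = 1
    label = {"class": "person", "bbox": (x0, y0, w, h)}
    return grid, label

# Produce a labelled training batch of any size at near-zero cost.
batch = [make_synthetic_scene(seed=i) for i in range(1000)]
```

Because the generator controls every scene parameter, it can also be tuned to balance the dataset (e.g. equal counts per pose or lighting condition), which is one way synthetic data reduces the likelihood of bias.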

Synthetic data creation also minimizes privacy risks and reduces the likelihood of data bias. 

Data anonymization

Even better, when capturing real-life data to generate insights, companies can take the extra step to de-identify individuals. This includes blurring faces on camera feeds, not recording or storing any footage, and removing any personally identifiable information (PII) from datasets.

At meldCX, we made a decision early on in our AI journey to not capture any PII by turning individuals into a tokenized anonymous persona: a random number in the system. Detail and depth are then added to the anonymized persona through objects, such as the clothes the person is wearing, and non-facial behavior, such as movement and gait.
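A minimal sketch of this tokenization pattern might look like the following. The class and attribute names are illustrative, not meldCX's actual implementation; the key property is that each token is a random identifier derived from nothing about the person, and only non-identifying attributes are ever stored against it.

```python
import secrets

class PersonaRegistry:
    """Assigns each tracked individual an opaque random token.
    Only the token and non-identifying attributes (clothing,
    gait, movement) are stored; no images and no PII."""

    def __init__(self):
        self._personas = {}

    def new_persona(self):
        # Random ID: not a hash of a face, name, or any real identifier,
        # so it cannot be reversed to re-identify the person.
        token = secrets.token_hex(8)
        self._personas[token] = {"attributes": {}}
        return token

    def add_attribute(self, token, key, value):
        self._personas[token]["attributes"][key] = value

registry = PersonaRegistry()
t = registry.new_persona()
registry.add_attribute(t, "upper_clothing", "red jacket")
registry.add_attribute(t, "gait", "walking")
```

Analytics can then be run over personas and their attributes (foot traffic, dwell time, movement patterns) without any record ever linking back to an identifiable individual.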

Segmenting user roles 

As a tool for communication and collaboration, computer vision analytics are at their best when all areas of a business can fully participate and glean value from them. 

To maintain the security of data, computer vision platforms should have flexible and customizable security permissions that allow for an appropriate balance of collaboration and control. 

For instance, permissions can be set so that only the Security Lead can view video, while the Marketing team can access only the non-video data output from the platform dashboard.
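A role-based permission model like the one just described can be sketched in a few lines. The role names and permission strings below are hypothetical examples matching the scenario above, not a specific platform's API.

```python
# Map each role to the set of permissions it holds.
# "view_video" is restricted to the Security Lead; Marketing
# only sees non-video analytics from the dashboard.
ROLE_PERMISSIONS = {
    "security_lead": {"view_video", "view_analytics"},
    "marketing": {"view_analytics"},
}

def can_access(role: str, permission: str) -> bool:
    """Return True if the given role holds the given permission.
    Unknown roles get no access by default (deny by default)."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(can_access("security_lead", "view_video"))  # True
print(can_access("marketing", "view_video"))      # False
```

Denying by default for unknown roles is the safer design choice here: new roles must be explicitly granted access to sensitive outputs like raw video rather than inheriting it.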

Regulatory bodies promoting ethical AI

Globally, the industry is heading toward ethical AI regulation across the board, not just for computer vision. 

All 193 member states of the United Nations Educational, Scientific and Cultural Organization (UNESCO) have unanimously adopted a recommendation on the ethics of AI. It aims to realize the advantages of the technology while reducing the human rights risks associated with its use. 

Additionally, companies such as TrustArc provide independent third-party assessments and certifications to companies such as meldCX to verify that technology providers adhere to privacy regulations and standards such as GDPR and ISO/IEC 27001.

Businesses can leverage these tools and resources to ensure their computer vision systems meet the highest standards of ethics and to get ahead of compliance before regulations go into effect. 

A collective responsibility

In this information age, data is power, and with that comes great responsibility.

Computer vision is a powerful tool, and it’s up to everyone to address tough ethical questions to establish best practices that uphold human dignity. 

All teams—from research and data science to executive levels—are equally responsible for keeping ethical and privacy standards top-of-mind. This process begins at ideation and continues throughout the entire product lifecycle.