How valuable is your AI trust currency?

Presented by Modzy


AI is proving its ability to find patterns hidden within troves of data, accelerate decisions and predictions based in fact, and save us time, energy, and money. Yet, even with recent advancements and investments, organizations with AI in production are still the exception rather than the rule.

In a world conditioned to expect instant gratification, end users and stakeholders quickly become skeptics when an AI pilot fails or a high-profile AI implementation results in unintended consequences. The doubt and second-guessing that follow can quickly turn into discouragement and distrust of otherwise sound solution ideas.

If trust is the currency of business and life, how will a digital-first world and increased reliance on machine learning technologies affect your AI trust currency balance? Trustworthy AI is entirely dependent on building trust in your people, processes, data, tools, and models, while simultaneously establishing mechanisms for building trust into your AI itself.

Trust in your people (and suppliers)

A culture of trust that emphasizes respect, safety, and building relationships with the people creating, deploying, and overseeing your AI, coupled with accountability for all, is critical for long-term success. When recruiting members to your team, beyond the obvious alignment of values, assess experience and expertise relative to your domain, mission, and technical needs.

Similar trust-building tactics should apply to your data science and software suppliers after contract award (they’re people too). During procurement due diligence, however, verifying ethics and values, experience, expertise, commitment, and reputation is paramount in order to filter out disingenuous vendors that lack credentials, agility, and transparency.

Trust in your processes

There’s a reason why process maturity models are so popular: the consistency and control you gain as you move up in maturity create added confidence. They show that your organization can follow time-tested processes to achieve its goals and objectives.

The same goes for the data science and software engineering methods used to develop, govern, monitor, and retrain AI models in production. Integrating and orchestrating modern DevSecOps (code + infrastructure), DataOps (data), and MLOps (models) provides an end-to-end lifecycle of capabilities, giving data science and software engineering teams automation, flexibility, and room for variation. Beyond building trust, these processes establish consistency, reliability, and efficiency gains, and provide assurance to leadership that the appropriate processes are being followed.

Trust in your data

Whether an organization is flush with labeled data or relies on externally sourced data, domain subject matter experts must be able to verify the provenance and lineage of the training and validation data. Data is critical for machine learning, and machine learning is essential for any AI application. Incorrect or low-quality inputs to a machine learning model will always produce faulty outputs, and faulty outputs breed distrust.
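
As a loose illustration, the short Python sketch below (standard library only) records a content hash and a few lineage fields for a single data file so that provenance can be checked later. The file names and the fingerprint_dataset helper are hypothetical, not part of any particular toolchain.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def fingerprint_dataset(path: str, source: str, note: str = "") -> dict:
    """Record a content hash plus basic lineage fields for one data file."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return {
        "file": path,
        "sha256": digest,          # detects silent changes to the data later
        "source": source,          # where the data came from (supplier, pipeline)
        "note": note,              # subject-matter-expert verification notes
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

# Example (assumes a train.csv file exists alongside this script):
manifest = [fingerprint_dataset("train.csv", source="internal-labeling-team")]
Path("provenance.json").write_text(json.dumps(manifest, indent=2))
```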

Part of the process should also include steps to ensure that data is free from bias, treats protected groups fairly, and complies with privacy and usage rights. Lastly, data security is not typically a top-of-mind concern for a data scientist. Given the growing adversarial threats to AI, it is important to implement technical steps that can detect whether training or inference data has been poisoned, and to build models that can natively defeat such threats.
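
To make the fairness step concrete, here is a minimal sketch of one common check, demographic parity, using pandas. The column names, the toy data, and the 5% threshold are illustrative assumptions; real programs typically apply several fairness criteria, not just this one.

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, label_col: str) -> float:
    """Spread between the highest and lowest positive-outcome rate across groups."""
    rates = df.groupby(group_col)[label_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical labeled data: 'approved' is the positive outcome being checked.
df = pd.DataFrame({
    "group": ["a", "a", "b", "b", "b"],
    "approved": [1, 0, 1, 1, 1],
})
gap = demographic_parity_gap(df, group_col="group", label_col="approved")
if gap > 0.05:  # threshold is a policy choice, not an industry standard
    print(f"Review needed: demographic parity gap of {gap:.1%}")
```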

Trust in your tools

Today’s AI is dependent on the data scientists who built the model to explain how it works, how it should be used, and when it behaves erratically and requires further attention. It’s counterintuitive that we have ended up with more work trying to manage the very thing that should make work easier.

Fortunately, in the last year, certain MLOps and ModelOps software tools have emerged to help organizations manage their AI deployments and alleviate that dependency.

MLOps software tools offer capabilities to standardize packaging, deploying, managing, scaling, and monitoring AI models in production. Organizations also gain a centralized repository for their AI models, admin features to manage use and permissions, insight into model performance metrics, and the ability to pull audit logs for completed jobs.
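
These products do not share a single common API, but the toy Python class below suggests the general shape of a centralized repository that logs every access for later audit. ModelRegistry and its methods are invented purely for illustration.

```python
import logging

# Write an append-only audit trail; a real tool would use a database.
logging.basicConfig(filename="model_audit.log",
                    format="%(asctime)s %(message)s", level=logging.INFO)

class ModelRegistry:
    """Toy centralized model repository that logs every registration and fetch."""

    def __init__(self):
        self._models = {}  # (name, version) -> artifact location

    def register(self, name, version, artifact, user):
        self._models[(name, version)] = artifact
        logging.info("REGISTER %s:%s by %s", name, version, user)

    def fetch(self, name, version, user):
        logging.info("FETCH %s:%s by %s", name, version, user)
        return self._models[(name, version)]

registry = ModelRegistry()
registry.register("churn-model", "1.0.0", "s3://models/churn-1.0.0", user="alice")
registry.fetch("churn-model", "1.0.0", user="bob")
```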

ModelOps software tools go even further, offering customized dashboards plus alerting and reporting features that give transparency and insight into overall model performance and drift. The standardization these tools provide creates an established specification for users to follow, decreases ambiguity, and improves quality and productivity. Software tools are a way to build trust into the very fabric of the AI systems you develop.
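
Drift detection itself can be as simple as comparing feature distributions between training time and production. The sketch below uses a two-sample Kolmogorov-Smirnov test from SciPy; the feature data and the significance level are hypothetical, chosen only to show the mechanism.

```python
import numpy as np
from scipy.stats import ks_2samp

def has_drifted(train_values, live_values, alpha=0.01):
    """Flag a feature whose live distribution differs significantly from training."""
    _statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha

# Hypothetical data: production inputs have shifted upward relative to training.
rng = np.random.default_rng(0)
train_age = rng.normal(40, 10, size=5_000)
live_age = rng.normal(47, 10, size=5_000)
print(has_drifted(train_age, live_age))  # True: raise an alert for this feature
```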

Trust in your models

Data scientists are inherently wary of models they didn’t build themselves. To overcome this hurdle and establish trust in a pre-trained model, metadata must be accessible to end users: the model version, assumptions and notes, performance metrics, an explanation of the model architecture, and descriptions of the training and validation data sets.
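
In practice this metadata is often bundled as a "model card." The dataclass below is one hypothetical shape for it; every field name and value is invented for illustration rather than drawn from any specific product.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """Minimal metadata an end user needs before trusting a pre-trained model."""
    name: str
    version: str
    architecture: str
    training_data: str       # provenance of the training set
    validation_data: str     # provenance of the validation set
    metrics: dict = field(default_factory=dict)
    assumptions: list = field(default_factory=list)

card = ModelCard(
    name="sentiment-classifier",
    version="2.1.0",
    architecture="fine-tuned transformer encoder",
    training_data="internal product reviews, 2020 Q3 snapshot",
    validation_data="held-out 10% of the same snapshot",
    metrics={"f1": 0.91, "accuracy": 0.93},
    assumptions=["English text only", "inputs under 512 tokens"],
)
print(json.dumps(asdict(card), indent=2))  # publish alongside the model artifact
```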

The more you allow users to interact with a model, whether to reproduce results or to test and retrain it with their own data, the more comfortable they will likely become. In addition, auditing, explainability, and monitoring capabilities give users understanding of and transparency into model performance, and ultimately the trust needed to use the model in production.

For AI to become ubiquitous, trustworthiness must be a key tenet. Without it, AI advancement will likely be met with significant resistance. We’re at a tipping point today. Laying the foundation and investing in building trustworthy AI results in an AI trust currency balance that can create long-term benefits for all organizational stakeholders. Don’t counterfeit your AI program’s success!

Josh Elliot is Head of Operations at Modzy.


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact sales@venturebeat.com.
