On this episode, we explore data neutrality — and why ensuring unbiased, reliable data is fundamental to delivering AI-powered customer experiences.
AI is everywhere in today’s customer experiences, from chatbots handling order inquiries to copilots that help agents provide better support. However, the quality of these experiences depends entirely on the quality of the data powering them. When data is biased or compromised, it can lead to unfair treatment, poor personalization and inconsistent results across customer segments — ultimately damaging the brand trust and loyalty CX leaders work so hard to build.
With 87% of U.S. consumers demanding transparency in how brands source data for AI models, and growing regulatory pressure around data governance, understanding data neutrality has become a business imperative. Our expert guests break down this complex topic in practical terms, sharing strategies for evaluating data sources, implementing proper auditing practices and choosing between custom and off-the-shelf datasets to support your CX goals.
Listen for the compelling insights of Amith Nair, global vice president and general manager of Data & AI Solutions at TELUS Digital, and Professor Renato Vicente, associate professor of applied mathematics at the University of São Paulo and director of the TELUS Digital Research Hub.
Show notes
Read more about the TELUS Digital survey on AI data transparency.
Guests

Amith Nair
Global vice president and general manager of Data & AI Solutions at TELUS Digital
Renato Vicente
Associate professor of applied mathematics at the University of São Paulo and director of the TELUS Digital Research Hub
Episode topics
00:00 - Introduction
02:21 - How do experts define data neutrality?
03:18 - What's driving AI adoption in business today?
04:27 - How are businesses using AI in practical applications?
06:23 - Why is data quality fundamental to AI-driven CX?
08:46 - What risks come with poor data management?
10:33 - What should leaders look for in data partners?
12:03 - How does synthetic data impact data neutrality?
13:27 - What happens when biased data enters CX systems?
15:40 - What role do off-the-shelf datasets play?
19:49 - What does proper dataset curation involve?
21:01 - How should companies test datasets before implementation?
23:59 - How do you choose between custom and off-the-shelf data?
25:25 - What's next for AI data management?
28:20 - What key takeaways should CX leaders remember?
Transcript
[00:00:01] Amith Nair: Trust nobody, authenticate everything. Same applies when it comes to using data for AI.
[00:00:06] Renato Vicente: You need experts to tell you where these subtle biases are.
[00:00:10] Amith Nair: Neutrality will become a baseline expectation. It's not a differentiator.
[00:00:16] Robert Zirk: From AI-powered recommendation engines personalizing your website to chatbots routing customer inquiries to predictive analytics identifying customers at risk of churn, AI is everywhere in today's customer experiences — and that means the quality of your CX depends, in large part, on the quality of the data you use to power it.
[00:00:37] However, not all data is created equal. Data that's biased or compromised can negatively affect customer interactions and, in turn, damage the loyalty and trust your brand has worked so hard to build.
[00:00:53] That's why understanding data neutrality is crucial for today's CX leaders. And our expert guests will break this down in a way that's clear, practical, and relevant to your CX goals.
[00:01:05] Robert Zirk: So today on Questions for now, I'm joined by Amith Nair, global vice president and general manager of Data & AI Solutions at TELUS Digital, and Professor Renato Vicente, associate professor of applied mathematics at the University of São Paulo and director of the TELUS Digital Research Hub, as we ask: Why should CX leaders care about data neutrality?
[00:01:36] Robert Zirk: Welcome to Questions for now, a podcast from TELUS Digital where we ask today's big questions in digital customer experience. I'm Robert Zirk.
[00:01:47] Robert Zirk: To help unpack data neutrality, I spoke with Professor Renato Vicente. He's an associate professor of applied mathematics at the University of São Paulo, where his research focuses on machine learning, and he's the director of the TELUS Digital Research Hub, which is exploring how AI agents and large language models can collaborate to tackle complex problems.
[00:02:10] Renato Vicente: For example, we are teaming up with the School of Medicine right now to help them to use AI to analyze and figure out how relevant published evidence is when it comes to answering critical medical questions.
[00:02:21] Robert Zirk: Renato defined data neutrality as the practice of ensuring collected data doesn't affect the larger dataset in a way that skews its outputs or leads to inaccurate results.
[00:02:31] Renato Vicente: It means making sure that the data we collect is not affected by bias or consistent errors that could distort what we are actually trying to understand about the data, about the population generating this data.
[00:02:46] So of course you can think about very important fairness and ethical concerns, but it's also a question of efficiency because you use the data to make decisions in the end and the decisions are only as good as our ability to recognize patterns in the real world connected to ethical concerns, but also to business efficiency.
[00:03:10] Robert Zirk: According to Zendesk, 63% of consumers express concerns about biases and unfair treatment from AI systems.
[00:03:18] Amith Nair: AI is no longer a novelty. It's a high-priority business investment.
[00:03:22] Robert Zirk: That's Amith Nair, global vice president and general manager of Data & AI Solutions at TELUS Digital. His team addresses next-gen training data requirements to advance frontier AI.
[00:03:34] Over time, he's observed organizations shifting from narrow experimentation to a serious focus on leveraging AI in strategic initiatives, including those that are relevant to CX.
[00:03:46] Amith Nair: Most C-level executives, whether it's CEOs, CTOs, will now rank AI in their top three technology priorities. Even if you rewind to 12 months ago, there were so many different needs that enterprises had, or their vision for AI was so different, and most enterprises we spoke to were in that ideation phase with lots of great ideas that then moved into prototype phase very quickly. But that's essentially where it stopped.
[00:04:12] Most of these prototypes, it became very apparent very quickly that they did not meet any business critical needs or they just could not operationalize it.
[00:04:20] Robert Zirk: Amith has seen businesses shift their focus from AI for automation to leveraging it for advanced decision support.
[00:04:27] He shared an example where TELUS Digital used computer vision in the farming and agriculture industry to help winemakers optimize their harvesting process.
[00:04:36] Amith Nair: For the winemakers, harvesting times were fluctuating very rapidly. What was maybe a one to two week shift is now shifting to multiple months and the predictability of when to harvest the grapes as it ripens was starting to get very difficult. So with computer vision using drone imaging as well as other imaging, we could predict when exactly to harvest the grapes so as to make the wine. So the loss of grapes was a big issue and that's starting to reduce because of this kind of decision making support from AI.
[00:05:08] So that's one. In other areas, enterprises are investing in smarter AI reasoning, so chain of thought, multi-step logic, agentic or autonomous AI that can plan and act independently, and also multimodal AI that combines text, image and data inputs.
[00:05:24] Robert Zirk: Amith pointed out that there's been an uptick in the use of AI across business functions.
[00:05:29] Amith Nair: We are seeing that with off-the-shelf solutions and in-house platforms that are now integrated into CRM, supply chain, HR, customer support. So all of these areas are seeing business applications develop from an AI perspective as well.
[00:05:42] There's also a growing emphasis on using smaller, specialized AI models that are very tailored to specific industries or domains. Rather than relying solely on these large general purpose models that are heavy and take a lot of computational power, what this does is it delivers better accuracy and compliance, especially in regulated industries like finance and healthcare.
[00:06:03] Robert Zirk: Think about what Amith just described: AI integrated into customer support systems, CRM platforms that personalize customer interactions and specialized models built for specific industries. All of these AI-driven customer experiences can only work as well as the quality of the data powering them.
[00:06:23] That's why data neutrality, ensuring this data is as unbiased and reliable as possible, is fundamental to delivering experiences customers expect — and why it's a top concern Amith hears in conversations with business leaders.
[00:06:39] Amith Nair: The biggest challenge right now is the relevance of data, the transparency of data and the increased focus on making sure the data that is collected is within the guidelines and rule sets that are defined. And there are so many different compliance issues that are being defined both by regulators as well as by the industry themselves.
[00:06:57] Robert Zirk: A recent TELUS Digital survey shows that 87% of US consumers want transparency in how brands source their data.
[00:07:05] And beyond consumer expectations, Amith indicated that enterprises are also facing growing regulatory pressure. Stanford University's 2025 AI Index Report notes an increase in AI-related regulations, with U.S. federal agencies having issued 59 in 2024. That's more than double compared to 2023 — and involving twice as many agencies.
[00:07:30] Amith Nair: Enterprises are increasingly asking for a combination of multiple factors that help them be reliant on the right type of data. There's growing regulatory pressure.
[00:07:41] Governments and regulators worldwide are introducing or enforcing laws that require transparency in how AI models are trained and what data they rely on. For example, the EU AI Act, there's GDPR. U.S. has specific state laws: for example, in California, there's CPRA — but all of these demand that data be provenanced, consented and that you can explain how you gathered the data. And if you're not compliant, then the legal and financial penalties could be very large.
[00:08:08] Robert Zirk: Renato pointed out that, in regulated industries, the ways in which businesses retain, curate and validate data is also subject to regulations.
[00:08:19] Renato Vicente: Some industry sectors are more, by now, experienced with this issue of regulating data, and I think as we are going to use AI everywhere, the trend is that all the other industries will have to reach the same standard.
[00:08:35] Robert Zirk: A Gartner study shows that 63% of organizations lack confidence in their AI data management practices — or lack appropriate AI data management practices altogether.
[00:08:46] In addition to regulation, Amith noted that data auditing and verification is also driven by risk management.
[00:08:54] Amith Nair: Opaque data sourcing introduces a bunch of risks such as copyright infringement, bias and discrimination, security risks from poorly governed or malicious data sources. So you really should audit and verify these datasets to reduce those liabilities.
[00:09:09] There's also issues around ethical AI. There's ESG commitments or environmental social governance commitments. Most of the organizations are publicly committing to ethical AI and these ESG standards. So transparent data sourcing then supports fairness, accountability, sustainability. And it helps align AI used with the company's values and social responsibility goals.
[00:09:30] Robert Zirk: Transparency sheds light on where data is sourced, but it's not enough on its own. This is where the principle of data neutrality comes into play, and why it's becoming a more critical issue for enterprises. According to Amith, all of these factors that can affect the neutrality of data come down to safeguarding the organization's reputation.
[00:09:53] Amith Nair: AI-related controversies can severely damage the brand trust. Any public backlash against AI systems that are trained on stolen or harmful data, or it could be based on deepfakes or biased chatbots, has made enterprises really cautious. So transparency in data sourcing protects their brand image ultimately, and then builds trust with customers, investors and the public in general.
[00:10:17] Robert Zirk: When it comes to sourcing data with neutrality in mind, Amith emphasizes that cost and scale shouldn't be the only considerations. Leaders need to thoroughly evaluate their data partners, examining their technical expertise and approach to data management.
[00:10:33] Amith Nair: Business leaders should really be able to assess the ethical, technical and operational integrity of the partner and their data practices. So business leaders should really push for and demand transparency of data sourcing. Ask questions like: “Is it scraped? Is it licensed? Is it crowdsourced or is it proprietary?
[00:10:51] “What are the sources and the origins of this data? What is your consent mechanism and what processes have you put in place to enable this?” It’s important to understand the terms of use and the licensing agreements of this data that is collected. Any vague or incomplete explanation, especially for large scale or multilingual datasets, should be an immediate red flag.
[00:11:11] Robert Zirk: Another consideration for leaders as they look to ensure the neutrality of their data is to understand their data partners' bias auditing and mitigation practices.
[00:11:21] Amith Nair: For example, does the partner themselves conduct any kind of bias audits on their datasets? That's an important question to ask. Another important verification is to check for security and privacy and consent practices. Is the data legally obtained with clear user consent? How do you ensure no PII…
[00:11:39] Robert Zirk: …that's Personally Identifiable Information…
[00:11:42] Amith Nair: …is included without proper anonymization? Also, what are your compliance requirements with respect to GDPR, CPRA, HIPAA et cetera, et cetera?
[00:11:52] Robert Zirk: Renato raised another important consideration: the proliferation of synthetic data — data artificially generated by AI systems to mimic real-world data.
[00:12:03] Robert Zirk: Synthetic data has use cases in certain scenarios — for example, when it's used for testing without using real customer data.
[00:12:11] The problem is when synthetic data isn't properly labeled or curated and is then incorporated into existing data pools, which can seriously undermine data neutrality.
[00:12:23] Renato Vicente: It's similar to counterfeit money. If it's not a crime to print your own money, then there’s gonna be chaos in the economy. So it's a serious crime to use fake money and, I think, probably we are going to have to have some kind of regulation along the same line.
[00:12:42] Robert Zirk: With the increasing presence of synthetic content and data, Renato emphasized the importance of expert curation.
[00:12:50] Renato Vicente: If you have a dataset that is about some kind of health condition, usually a physician can tell if it's fake or not. So you need expert knowledge and you need new techniques and you need some regulation as well. I think you can be optimistic if you have these three things in place.
[00:13:11] Robert Zirk: Remember, synthetic data can be useful in different scenarios. The key lies in proper labeling practices to tell synthetic and real-world data apart, and coupling that with the expertise to leverage both types of data to your advantage.
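The labeling practice Renato describes can be made concrete with a small sketch. This is a minimal illustration only, assuming a simple Python record schema of our own invention (the `provenance` field name and its values are not a standard):

```python
from dataclasses import dataclass

@dataclass
class Record:
    text: str
    # Provenance label: "real" for collected data, "synthetic" for
    # AI-generated data. Illustrative schema, not an industry standard.
    provenance: str = "real"

pool = [
    Record("order #123 arrived late", provenance="real"),
    Record("my package never shipped", provenance="real"),
    Record("generated complaint about shipping", provenance="synthetic"),
]

def training_pool(records, include_synthetic=False):
    """Keep synthetic data available for testing, but exclude it from
    the training pool unless it is explicitly requested."""
    return [r for r in records
            if include_synthetic or r.provenance == "real"]

print(len(training_pool(pool)))        # real records only
print(len(training_pool(pool, True)))  # real plus synthetic
```

The point is simply that once provenance travels with each record, filtering synthetic content out of (or into) a data pool becomes a deliberate choice rather than an accident.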
[00:13:27] Robert Zirk: Beyond the challenges of synthetic data, there's another critical issue: CX leaders need to understand what happens when any type of biased data makes its way into your customer-facing AI systems, whether it's data that underrepresents certain customer segments or simply wasn't properly vetted for fairness.
[00:13:46] Biased data can lead to inconsistent experiences, unfair treatment and ultimately damage the customer relationships you've worked hard to build. Renato emphasized that trust is fundamental to how data neutrality influences outputs.
[00:14:01] Renato Vicente: If your data is not neutral, what's going to happen is that you're gonna have biased customer experiences. You can end up having unfair treatment — for example, poor personalization, inconsistent results across different groups — and that's pretty damaging for business because you have social networks, right? So when you have experiences that are very different across different groups, people will broadcast that and that's pretty damaging. So you have to be really serious about data neutrality because you can hurt trust in your brand and, of course, satisfaction with your products.
[00:14:40] Robert Zirk: Amith emphasizes that biased data outputs can have consequences far beyond just poor customer experiences.
[00:14:48] Amith Nair: If the data is not neutral, from an enterprise standpoint, it puts them in a lot of legal issues because of all the governmental regulations that we talked about earlier or because of copyright infringement issues or because you are unlocking or releasing someone's PII. This all means that they could be legally constrained or they might run into legal issues. And beyond that, that also has a huge amount of impact on their brand. So it's not something that any enterprise wants to do, and the more checks and balances they put in as part of the model creation, the better it is for them from an outcome perspective.
[00:15:23] Robert Zirk: As the conversations about data neutrality continue among business leaders, organizations may be looking toward off-the-shelf datasets. Renato notes that these ready-made data collections come with their own considerations when it comes to neutrality and compliance.
[00:15:40] Renato Vicente: I think the value is in off-the-shelf data that has been also curated by experts because then, you can guarantee that the biases have been treated adequately. And not only curated by experts, but also you need to have documentation about the process of creating these datasets. So it's not only off-the-shelf, but curated, expertly curated off-the-shelf data — that's where the value is.
[00:16:11] Robert Zirk: Renato shared an example illustrating how experts compare patterns to detect biases in datasets — and why it's important.
[00:16:19] Renato Vicente: Population patterns that you can find, for example, in health data. Those patterns in general reflect geographic, social, historical specificities. It's not like having a statistical test that you can automate. You need curation. And neutrality here is always talking about neutrality together with curation by experts because I think there is the value. The value is in having humans that are experts finding or helping to find or reduce biases.
[00:16:56] Robert Zirk: Amith noted TELUS Digital recently launched off-the-shelf datasets in response to market demand.
[00:17:02] Amith Nair: We were getting a lot of requests from our existing customers that use non-OTS datasets for specific OTS datasets. And it was inevitable that we started to provide this to our customers.
[00:17:12] Robert Zirk: He highlighted several practical, strategic and technical advantages of off-the-shelf datasets. The first is accelerating development speed:
[00:17:22] Amith Nair: In general, it enables fast time to development because they eliminate the need to collect and curate large amounts of data from scratch, which, in turn, means that the time needed to train and deploy these models, especially in early experimentation or MVP stages, reduces dramatically. It's also cost-efficient because curating, cleaning, labeling and maintaining high quality datasets can get expensive and off-the-shelf datasets allow teams to skip these upfront costs and leverage pre-structured data.
[00:17:49] Robert Zirk: Renato shared a few other examples.
[00:17:52] Renato Vicente: If you think about math teachers, for example, they might use AI assistants to answer students' questions 24-7. We have been working with intensive care physicians, physicians that are specialized in cancer treatment and they are always very stressed 'cause the demand is really high and they are just a few specialists. So you can augment their capabilities with AI agents, for example, to assist with diagnosis. If you consider these examples — engineers, teachers, physicians — if you're gonna augment them with AI agents, you have to make sure that those AI agents learned their subjects very well because they're gonna be used for high-stakes decisions. So you need datasets that are carefully curated by domain experts.
[00:18:42] Robert Zirk: In addition to a fast start, Amith notes that off-the-shelf datasets can also offer diversity and scale to improve generalization and reduce model brittleness.
[00:18:53] Amith Nair: High-quality commercial datasets often come with clear licensing terms and documentation, which also means this helps enterprises stay compliant. And, as I mentioned earlier, this is especially important in regulated industries like finance, healthcare and legal. Also, in domains where data is rare or sensitive, for example, medical imaging or historical texts, off-the-shelf datasets can provide a valuable starting point for transfer learning or augmentation.
[00:19:19] Robert Zirk: And as brands look to minimize bias and prioritize data neutrality, Amith notes that off-the-shelf datasets can be a great option. He says they offer…
[00:19:28] Amith Nair: …measurably safer, fairer and more transparent foundation for training AI. But I’ll caveat that by saying that nothing is perfectly unbiased, although that is the attempt: try to get as unbiased as possible. They are a key tool for enterprises looking to build trustworthy, inclusive and compliant models.
[00:19:45] Robert Zirk: So what does the curation process of a dataset entail?
[00:19:49] Amith Nair: Expert curation involves intentional selection, labeling and cleaning of data to remove any inappropriate, offensive or skewed content. This should help reduce toxic, biased or non-representative samples that can cause harmful model behavior. Also, good OTS data should provide a balanced representation that strives for demographic, geographic and linguistic balance to avoid over-representing or under-representing specific groups.
[00:20:17] So expertly built datasets usually come with data cards or data sheets that detail how the data was collected, what populations are represented, and what the known limitations or risks are. This transparency really supports better model auditing and responsible deployment. And curation of these datasets also includes legal and ethical reviews to ensure the data was obtained with proper consent and usage rights. It also avoids use of unauthorized or exploitative content, which can, again, both be unethical and legally risky.
[00:20:50] Robert Zirk: Amith reminds leaders that proper due diligence needs to involve sandbox testing to evaluate AI responses before a dataset is fully implemented in an AI model.
[00:21:01] Amith Nair: Take a subset of the data and check for certain aspects of privacy, certain aspects of compliance. Take small sample sizes and start with that. In fact, I would recommend doing that beforehand, before actually creating the models. So enterprises themselves need to have some sort of compliance verification and audit tracking systems within their organizations to make sure that the data that's provided is compliant. But that needs to happen, in my opinion, even before it is utilized.
[00:21:28] It's that old security paradigm, right? Zero-trust security. What does that mean? Trust nobody, authenticate everything, but the same applies when it comes to using data for AI. Everything that AI is built on is based on the data itself. So it is your responsibility. Irrespective of what anybody says, you have to make sure that the data is clean and aligns with all of the best practices that you want, which means you have to do your own trust analysis.
[00:21:54] Robert Zirk: Amith recommends two crucial steps to protect against data supply chain risks: verify your data sources and ensure that you and your data partners alike maintain strict security measures.
[00:22:06] Amith Nair: How do you trust the partner? And if your supply chain is the partner, then if the partner themselves does not have all those checks and balances put in, then that is a problem.
[00:22:17] The second is: are you using only that as your source of supply chain? Which, in most cases, most customers tend to have multiple sources.
[00:22:25] So how do you centralize the alignment of data that you receive? Within your organization, what's the hierarchy of data movement? And even once the data comes in, how do you protect the data from veering off from what it's supposed to access?
[00:22:39] But those are all security compliance aspects that you need to have internally and you need to insert those in right now before you actually start to execute.
[00:22:47] The challenges that I've seen with a lot of companies right now and that they're running into is many of these policies are put in place after the data is brought in. That can often be late. You really need to understand, just as you do for internal IT security or development security or product security, you also need to start implementing and having policies around AI security. And more often than not, that comes in from the perspective of AI data.
[00:23:13] Robert Zirk: Building on these internal processes, Renato recommends leaders consider the following questions when evaluating data sources:
[00:23:21] Renato Vicente: Where does the data come from? Can you trust this data? You have to consider if the data is representative, well-documented, if the source of the data is compliant to legislation. And you also need to find — [it] always is a good idea to have alternative data sources. So, actually, you can compare, you can do some cross-validation to check if the data makes sense or if it has some weaknesses.
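Renato's suggestion of keeping alternative data sources for cross-checking can be sketched crudely: compare a shared summary statistic across the two sources and flag disagreement. The data values and the 15% tolerance below are invented purely for illustration; a real comparison would use proper statistical tests chosen by domain experts:

```python
from statistics import mean

# Hypothetical: the same metric (say, customer ages) drawn from two
# independent sources that should describe the same population.
source_a = [34, 41, 29, 38, 45, 33, 40, 36]
source_b = [35, 39, 31, 37, 44, 32, 41, 90]  # one suspicious outlier

def sources_agree(a, b, tolerance=0.15):
    """Crude cross-check: do the two sources' means agree within a
    relative tolerance? Disagreement flags a weakness to investigate."""
    mean_a, mean_b = mean(a), mean(b)
    return abs(mean_a - mean_b) / mean_a <= tolerance

print(sources_agree(source_a, source_b))
```

A failed check doesn't say which source is wrong; it says the data "doesn't make sense" yet, in Renato's phrasing, and needs expert review before use.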
[00:23:53] Robert Zirk: At the end of the day, businesses need to ensure they're working with high-quality, compliant data.
[00:23:59] The questions Renato raised are important considerations to factor in as organizations weigh the benefits between choosing custom data solutions and off-the-shelf datasets.
[00:24:10] Amith referenced three points of comparison: cost, quality and speed. And how you weigh each of those points depends on your use case, industry, and risk appetite.
[00:24:22] Quality comes down to whether specific data relevant to the problem you're trying to solve is available. If that specific data isn't available on the market through OTS datasets, for the best outcomes, you might need to look to a more custom data solution.
[00:24:38] Amith Nair: Example: medical diagnostics, legal summarization, financial forecasting. There is no relevant or high quality off-the-shelf dataset. You need model outputs that align tightly with proprietary workflows or tone. You choose off-the-shelf datasets when you are building general purpose models, like sentiment analysis, summarization, object detection and you need rapid prototyping or pre-training and industry benchmarks are acceptable for your task. And, as mentioned earlier, other considerations should be speed to deployment. So if time to market is important, then the obvious winner is an off-the-shelf dataset. Or if cost is an issue, then, again, an off-the-shelf dataset might be the correct option for you.
[00:25:20] Robert Zirk: I asked Amith how he sees the AI data ecosystem evolving in the next few years.
[00:25:25] Amith Nair: There's a lot of data requests that are coming in specifically for GenAI use cases and AGI use cases. So that's an area that we have to get really good at. There's a huge amount of focus on crowdsourced data. Now, that's not going away. Crowdsourced data will still continue, but specific data around tasks that are oriented towards specialists is becoming more and more prominent, especially for GenAI data. So it might be data that is very specific to, for example, a math-based implementation in Portuguese, right? So you have to find an expert who understands both Portuguese and is also an expert in math.
[00:26:03] And specific aspects of math might be algorithmic, it might be logarithmic, whatever that is. But that's the evolution that we are starting to see in the market specifically around that.
[00:26:12] Robert Zirk: Furthermore, as AI capabilities grow, it'll be even more crucial to build and maintain customer trust.
[00:26:19] Renato sees a concerning trend emerging in the way that people trust AI, where it's fluctuating between extremes instead of maintaining a balanced perspective.
[00:26:29] Renato Vicente: People will sort of oscillate between trusting AI too much and not trusting it at all. And for your brand, actually, it can be pretty dangerous because if people trust too much and then they don't trust at all, when they start broadcasting how unsatisfied they are with your products on social networks, the way to deal with that is more control over your data and also being transparent on how we are processing data, in particular personal data.
[00:27:03] Trust depends on transparency. And when you are transparent about your data, you have to justify why you had made some decisions about your data and then you need the experts and you need techniques to identify and reduce biases, for example.
[00:27:19] Robert Zirk: On top of the challenges Renato described, Amith points to a fundamental shift in the way businesses approach data neutrality.
[00:27:27] Amith Nair: Neutrality will become a baseline expectation. It's not a differentiator, right? Trust will be earned through transparency, representation and governance, not just accuracy or performance. And data provenance tools, audits and third-party certifications will become standard in enterprise AI pipelines. It is important to note that true neutrality is complex. No dataset is perfectly neutral, but transparency, intentional balance and ethical sourcing will be critical steps towards trusting that data. And the future of trust is not just about being bias-free, it's about being bias-aware, and hence, transparent and responsible.
[00:28:05] Robert Zirk: We've covered the current landscape of data neutrality, how expertly curated off-the-shelf datasets can benefit brands while aligning with data neutrality principles and strategic considerations for leaders when sourcing datasets.
[00:28:20] As we wrap up, Renato emphasizes that one of the most important things for CX leaders to remember is that customer experiences are designed to be human-centric, so it's important to ensure your approach to AI keeps that value in mind as well.
[00:28:36] Renato Vicente: We like to call our work here at the Hub “human-centric AI” because I think we should think about AI not [as] an autonomous thing, but something that works together with our main human capabilities.
[00:28:49] Robert Zirk: And Amith reminds CX leaders that data neutrality is no longer just a technical consideration, but a core business imperative — one that's increasingly shaping customer experiences.
[00:29:01] Amith Nair: Data neutrality is here to stay. It is important for enterprises, organizations all over the world when it comes to building the right models. It's important for us as consumers to understand where the data came from and to make sure that it is neutral and unbiased. Ultimately, the consumers define how a lot of the AI models will be used, and if we are defining that and pushing for ethical usage of data, neutrality will be one of the key factors driving it.
[00:29:35] Thank you so much to Amith Nair and Professor Renato Vicente for joining me and sharing their insights today. And thank you for listening to Questions for now — a TELUS Digital podcast.
[00:29:47] Robert Zirk: For more insights on today's big questions in digital customer experience, be sure to follow Questions for now on your podcast player of choice to get the latest episodes as soon as they're released.
[00:29:59] I'm Robert Zirk, and until next time, that's all… for now.
Suggest a guest or topic
Get in touch with the Questions for now team to pitch a worthy guest or a topic you’d like to hear more about.
Email the show