Your Company’s Values Are Expressed Algorithmically

Boards must tailor committee charters to support ethical tech oversight and make tech risk discussion a recurring part of meeting agendas.

AI is transforming the foundations of business strategy, challenging board members to navigate an evolving landscape where data-driven decisions carry profound ethical and reputational implications. As algorithms increasingly drive how companies operate and engage with stakeholders, boards face a pivotal question: How can we ensure these powerful tools align with our company's core values and long-term goals? The largest tech companies haven't exactly done the rest of us any favors. New generative AI players have scraped content from across the web to train their models, sparking debates over copyright, consent and ethical data usage. While these practices may be technically legal, that's often because technology has outpaced existing laws, allowing companies to operate in gray areas where ethical boundaries are blurred.

Social media platforms employ AI algorithms that exploit well-known psychological tendencies to drive engagement and maximize advertising dollars. Dominant marketplaces embed dark patterns into their AI-powered platforms, nudging both consumers and businesses into purchases or decisions they might otherwise avoid. Even well-known professional networking and collaboration platforms have crossed ethical lines, quietly scraping user data to train AI models and burying opt-out options deep within their settings.

This approach creates a marketplace where the ends justify the means, pushing the limits of acceptable behavior as companies exploit these legal gaps for competitive advantage. While these methods have undoubtedly fueled rapid growth and scale, they come at a significant cost. This widespread blurring of ethical lines raises serious questions, especially as younger generations of consumers and professionals grow increasingly attuned to and wary of manipulative tactics.

For companies that lack the vast market power of tech giants, these practices raise challenging boardroom issues about how to conduct business ethically in an AI-enabled world. Given the poor precedent set by industry giants, how should boards respond? SEC Chair Gary Gensler has warned that AI systems can create conflicts of interest by optimizing for corporate goals over user well-being, subtly prioritizing profit at the expense of fairness and transparency. This warning underscores the critical role of directors in ensuring that AI-driven strategies align with ethical standards and protect the long-term interests of all stakeholders.

This Time, It's Different

Advertising has always used psychology to influence behavior. Since the mid-20th century, Madison Avenue has employed everything from celebrity endorsements to emotional appeals, encouraging audiences to associate products with particular lifestyles or social statuses. While these techniques were powerful, they were also broad, reaching general audiences through television, print and radio.

With the digital revolution, this landscape changed dramatically. Tech companies have harnessed the immense power of data, enabling them to tailor ads with a level of specificity that was once unimaginable. Today's platforms know everything from our shopping habits to the websites we visit, allowing them to serve hyper-personalized ads that align with our interests and vulnerabilities — whether we're individuals or businesses.

While the ability to target specific segments may appear to be an advancement, it has a darker side. The once-subtle nudge of marketing in the analog world has transformed into a calculated push, often using psychological manipulation to drive behavior. This shift poses unique challenges for companies that must operate responsibly, even as big tech companies lead with aggressive, data-driven tactics.

Hyper-personalization, enabled by vast amounts of data, allows companies to engage audiences with an uncanny level of accuracy. But this precision also means that companies can exploit personal and organizational vulnerabilities to a degree that was previously impossible. Techniques commonly used today include:

  • Scarcity and FOMO. By creating a sense of urgency with countdown timers and limited-time offers, digital ads can prompt quick, impulsive purchases. While Madison Avenue once relied on slogans like “Act Now!”, AI-driven ads take this much further. Algorithms today use detailed insights — even down to our current mood, based on recent searches or activities — to make scarcity feel intensely personal and timely. This can drive impulsive decisions not only for consumers but also for business buyers making high-stakes operational or strategic purchases under perceived time and cost pressure.
  • Social proof. Digital platforms leverage social proof by showing which of our colleagues or industry peers have engaged with similar products or services. While this can add a reassuring layer of familiarity, it can also exploit subtle forms of peer pressure, nudging businesses or professionals to make decisions driven more by a desire to fit in than by a genuine need. Algorithms can prioritize endorsements from influential connections, amplifying a sense of FOMO and prompting quicker buying decisions. Over time, this erodes genuine choice and can create a business culture driven by perceived social expectations rather than strategic priorities.
  • Manipulative design. Many digital platforms employ “dark patterns” or design techniques that nudge users — whether they're consumers or business professionals — toward certain actions. For example, default settings on apps and platforms often maximize data collection, making opting out difficult and frustrating. While traditional ads might have relied on persuasive slogans, dark patterns exploit user interface design to drive specific behaviors without explicit consent, impacting not just individual choices but also business decisions.

These tactics, while effective, can undermine autonomy and leave users feeling manipulated. Unlike traditional ads, which had clear endpoints (a TV commercial ends when the show resumes, a magazine ad is a single page), hyper-personalized digital ads follow users everywhere, continuously presenting decisions that may not be in their best interests.

In both B2C and B2B contexts, these issues extend beyond individual consumers to affect entire organizations. AI-driven techniques in the business world influence procurement, investment and strategic decision-making, creating pressures for quick decisions under the guise of urgency or scarcity. For companies that don't wield the influence of big tech, ethical AI practices are more than just a matter of compliance — they're essential for building sustainable growth and fostering trust.

The Ethical Dilemma

Board members find themselves on the horns of an ethical dilemma: These powerful techniques undeniably drive revenue and engagement, goals that management teams and model developers are often incentivized to pursue. Yet, as the largest players continue to blur ethical lines (and seemingly get away with it), how should boards weigh what is right and wrong in a digital world where the boundaries are increasingly unclear? SEC Chair Gensler stresses the importance of building “guardrails” into AI systems to ensure they align with users’ best interests and comply with evolving ethical standards.

Navigating this challenge requires a nuanced approach that balances immediate financial gains with the long-term impacts on trust, reputation and organizational integrity. In a digital landscape where ethical lines are often blurred, boards may find themselves asking, “How are we supposed to manage this responsibly?” It's not an easy task — AI-driven strategies bring immense pressure to deliver results, while the biggest players set questionable precedents that can make ethical governance feel like an uphill battle.

Why This Matters Now

Younger generations, particularly Millennials and Gen Z, are more aware of these manipulative tactics than previous generations — those of us typically around the boardroom table. Having grown up in a digital world, they are attuned to the subtle ways in which data is collected and used to influence their decisions. They understand that their online actions — clicks, likes, purchases — are part of a larger algorithm designed to anticipate their next move. This awareness has made them more skeptical of corporate intentions and less forgiving of perceived manipulations.

Recent surveys show that younger generations — primarily Gen Z and Millennials but also Gen X — are increasingly skeptical of companies that push ethical boundaries, particularly regarding data privacy and manipulative marketing tactics. These generations are not only acutely aware of the strategies used by large tech companies but are also willing to act against them. Whether by calling out brands on social media, participating in public boycotts or switching to competitors that better align with their values, younger consumers are exercising their influence in the marketplace. Research reveals that nearly two-thirds of Gen Z and Millennials choose brands based on values that align with their own, with many willing to pay a premium for ethical products.

These generations view hyper-personalized ads with suspicion, often perceiving them as invasive and manipulative, especially when they involve data exploitation or lack transparency. Where older generations might regard personalization as a convenience, younger generations frequently see these tactics as an infringement on their autonomy and an example of corporate overreach. Their collective skepticism has only been amplified by the unchecked power of major tech firms, which often seem to operate with relative impunity.

What Does “Ethical” Mean?

From a board member’s perspective, the ethical landscape of AI is further complicated by the divergent views of what constitutes “ethical” behavior among different stakeholders — consumers, developers and corporate leaders. Directors are tasked with overseeing AI deployment in ways that balance innovation and ethical responsibility, but they must do so in a world where these constituencies often disagree on where the boundaries of ethical behavior lie.

Consumers: Ethics as privacy, fairness and transparency. Consumers tend to view AI ethics through the lens of personal rights, emphasizing the importance of privacy, fairness and transparency. Many people worry about how their data is used and whether AI systems treat them fairly. According to Cisco, 84% of consumers are concerned about data privacy and 80% are willing to take protective measures. Bias in AI raises equally serious concerns about fairness — from ProPublica's reporting on racial bias in criminal risk-scoring algorithms to studies documenting higher facial recognition error rates for people of color. Furthermore, the opaque, “black box” nature of many AI systems can fuel distrust, as people feel uncomfortable with AI-driven decisions that impact them directly, such as credit scores or job screening, when they lack transparency. From the consumer’s perspective, ethical AI practices mean respecting privacy, ensuring fairness and providing transparency — standards they expect companies to uphold.

Developers: Ethics as innovation, efficiency and optimization. For developers, the conversation around AI ethics often centers on innovation, efficiency and optimization. Developers are frequently driven by the challenge of creating powerful, scalable AI models and may view ethical considerations as secondary to technical excellence. According to a study published in Communications of the ACM, many developers lack formal training in ethics, leading to unintentional oversights as they prioritize model performance over ethical concerns. Developers may see the use of personal data — particularly when anonymized or aggregated — as a necessary trade-off to create effective, high-performing systems. They are often focused on using AI to solve complex problems efficiently, sometimes leading them to regard privacy or fairness issues as constraints on innovation. This perspective can differ sharply from that of consumers, who may not see the technical benefits but are sensitive to potential personal risks.

Corporate Leaders: Ethics as profitability, sustainability and stakeholder balance. Corporate leaders, including board members, approach AI ethics from a broader, more strategic perspective that involves balancing the needs of various stakeholders. Directors are tasked with ensuring that AI practices align with company values, regulatory requirements and the expectations of consumers, investors and regulators. According to a Deloitte report, while 62% of executives see AI as essential for maintaining competitiveness, only 35% have established ethical guidelines for AI, highlighting the tension between growth and ethical governance. For corporate leaders, ethical AI practices involve finding a sustainable balance between profitability and social responsibility. They must consider how AI-driven strategies contribute to long-term value creation while respecting broader ethical standards. In contrast to developers, who may prioritize technical performance, corporate leaders are increasingly aware of the reputational risks associated with unethical AI and are responsible for navigating this complex ethical landscape on behalf of the organization.

This divergence in perspectives adds layers of complexity for directors, who are tasked with overseeing AI practices across a spectrum of ethical standards. The challenge is further compounded by varying cultural norms: What may be considered ethical in one region could raise concerns in another. For example, Western countries such as the United States and those in Europe emphasize individual privacy and data protection, while some Asia-Pacific countries are more accepting of data sharing for societal advancement. In this environment, directors must develop flexible yet robust governance frameworks that respect these diverse viewpoints and ensure that the company’s AI practices align with its values, long-term goals, and the expectations of its global stakeholders.

Boards Need to Get This Right

For companies operating without the protective scale of big tech, adopting ethical AI practices isn't just about regulatory compliance — it's a strategic imperative for building trust and sustainable value. Board members may find themselves questioning where to draw the line, especially when revenue and growth are closely tied to these powerful, data-driven strategies. But how, practically speaking, can boards approach this challenge?

The National Association of Corporate Directors’ recent Blue Ribbon Commission report on technology leadership offers timely guidance for addressing AI's ethical challenges. It is essential reading for corporate governance leaders today. By incorporating these elements into committee charters and board discussions, directors can play an active role in ensuring that AI aligns with company values and mitigates risks to trust and reputation.

Incorporate Trust and Values into Enterprise Risk Management (ERM)

  • Ask probing questions. Boards should engage management in explicit discussions about how AI practices impact customer and public trust. Questions like “Has the risk of damaging customer trust been integrated into our ERM framework?” or “How will AI usage affect our brand reputation over time?” can help identify potential blind spots and align risk management with ethical considerations.
  • Define values-centered oversight. Boards must establish clear agreements with management about which values-centered decisions should come to the board. This ensures that AI-related matters, particularly those with ethical implications, are given appropriate attention, aligning business strategy with corporate values at the highest level.

Tailor Committee Charters to Support Ethical Technology Oversight

  • Compensation and talent committee. Expand this committee's responsibilities to include the company's technology talent strategy. Oversight should include reviewing incentives for senior management and ensuring they support ethical AI development rather than merely short-term growth. This guardrail ensures alignment between strategic technology objectives and ethical values, particularly in fast-evolving areas like AI.
  • Nominating and governance committee. This committee should oversee ongoing board education on technology and AI ethics. This includes setting proficiency standards for directors and keeping the board informed about key ethical issues, from data privacy to transparency in AI. These steps ensure directors are equipped to make informed decisions on complex topics.
  • Audit committee. With its focus on internal controls, this committee should extend its oversight to data governance and digital applications. Regular monitoring of data-related controls will help mitigate risks associated with data privacy and bias, which are foundational to building and maintaining consumer trust in AI-driven products and services.
  • Risk committee. This committee should take a lead role in assessing technology-related risks, including those tied to AI, cybersecurity and emerging digital threats. Its charter can include responsibilities like evaluating insurance related to technology risks and reviewing risk disclosures related to AI governance.

Make Technology Risks and Opportunities a Recurring Agenda Item for the Full Board

  • Emphasize the risks of inaction. As technology continues to advance at a breakneck pace, boards must weigh the risks of not adopting AI or other innovations against the ethical and operational risks they introduce. Regular discussions about AI's broader implications, such as its role in competitive positioning or potential regulatory concerns, can help the board anticipate and adapt to shifts that impact the business.
  • Balance growth with ethical guardrails. Directors should lead conversations with management about balancing growth and operational efficiency with the company's values. Questions like “How will we balance revenue growth with ethical AI usage?” can prompt critical discussions on how to align business outcomes with ethical principles. Setting these guardrails helps ensure that technology-driven initiatives support long-term resilience and the company’s reputation.

In today's AI-driven world, ethics is not just a principle — it's a choice that every company must make. As Chair Gensler's recent warnings remind us, AI practices that prioritize profit over people can compromise trust and create systemic conflicts of interest. A company's true values are now expressed algorithmically, as the decision-making embedded in AI systems reflects and amplifies these values across every interaction. With the stakes this high, boards must recognize the urgency to ensure that algorithms responsibly balance the perspectives of consumers, developers and corporate leaders. Every board should be actively discussing how to implement the NACD Blue Ribbon Commission's recommendations, embedding ethical oversight into committee charters and governance frameworks. By choosing to lead responsibly, boards can help their companies thrive in a marketplace where innovation and integrity go hand in hand, securing sustainable growth and long-term trust in a rapidly evolving digital age.

About the Author(s)

Tom Petro

Tom Petro is director and chair of the risk management and trust committees of Univest Financial Corporation, director and chair of the finance committee of USA Nordic Sport and director of Fintegra. He is managing general partner of 1867 Capital Partners and board member in residence of Mach49.

