AI, Leaders, & Accountability: Are We Repeating History's Biggest Mistake?


Fact Box: Leadership, AI & Accountability

  • Historical precedent: Pre-industrial leaders often faced direct, personal risk from their decisions (e.g., kings on the battlefield).

  • Ethical codes: Principles like noblesse oblige and chivalry demanded courage, stewardship, and honour from the ruling class.

  • Industrial Revolution shift: Rise of capitalist elites reduced direct personal accountability, replacing it with profit-driven, impersonal power.

  • Modern problem: Political and corporate leaders today are often insulated from the immediate consequences of their decisions.

  • AI risk: Algorithmic decision-making can further dilute responsibility and obscure accountability.

  • Governance challenge: AI oversight structures must assign accountability to named owners and maintain transparent decision logs.

  • Key question: How do we re-embed “skin in the game” for both human and AI-mediated leadership in the 21st century?


The story of the Industrial Revolution, as it’s often told, is one of inevitable progress, a righteous march from a "darker" age into the light of modernity. This narrative, largely penned by the liberal victors of that profound societal shift, champions the overthrow of an old, supposedly stagnant order. But does this triumphalist account obscure a more complex reality? What if the pre-industrial aristocracy, frequently caricatured as merely decadent and exploitative, actually shouldered burdens and adhered to codes of conduct that have become dangerously diluted in the centuries since? At the heart of this re-examination lies a fundamental difference in how leaders, past and present, have faced the consequences of their actions.

In an age marked by complex global challenges and the rapid ascent of artificial intelligence, a searching question casts a long shadow over our governance: Who is truly accountable? Whether observing the often distant decision-making of modern political leaders or contemplating the opaque operations of AI systems, the lines of responsibility risk becoming dangerously blurred. Before we delegate more of our collective fate to either remote human authorities or nascent machine intelligence, a journey into our past offers vital perspective. The leadership models of the pre-industrial world, particularly the aristocracy, provide a stark contrast, revealing crucial lessons about what happens when power becomes unmoored from direct, personal consequence—a cautionary tale for both the corridors of contemporary power and the coming age of AI.

Consider the visceral reality for the ruling class in eras past. For kings, princes, and dukes, leadership wasn't a remote-controlled exercise. They didn't merely authorize wars from secure bunkers or distant offices; they often led their forces onto the battlefield, their own lives explicitly on the line. Figures like Harold Godwinson at Hastings (1066) or Richard III at Bosworth Field (1485) exemplify an era where the ultimate ramifications of leadership decisions were immediate and intensely personal. This profound "skin in the game" fostered a directness of responsibility that often seems diluted in modern political systems, where decisions of immense consequence can be made by those insulated from their immediate impact. This historical model forces us to ask: how can we ensure a similar weight of consequence for today’s leaders, and what does it mean when AI, which feels no consequences at all, begins to act?

This personal investment was once interwoven with societal codes like chivalry and the principle of noblesse oblige – "nobility obligates." These were not merely quaint customs but, at their best, ethical frameworks demanding courage, stewardship, and a degree of honour from the ruling class. While imperfectly upheld, the expectation of virtuous leadership, answerable to peers and potent moral authorities, existed. A tarnished reputation, built on perceived failures of these codes, could be devastating. As we scrutinize the complex machinery of modern governance and the emerging autonomy of AI, we must ask: What are the modern equivalents of these binding codes, and do they effectively hold power to account, be it human or algorithmic?

The pre-industrial social order, often characterized by its three estates (clergy, nobility, commoners), also forged intricate, if unequal, interdependencies. The aristocracy's privileges were, in theory, counterbalanced by duties: ensuring security, administering local justice, and offering a measure of protection to their dependents, especially in times of crisis. The lord of the manor was a visible, tangible presence, his fate intertwined with that of his domain. This contrasts sharply with the often impersonal and bureaucratic nature of modern states or vast corporate entities, where decisions affecting millions can emanate from distant centres of power, and where AI could further abstract these relationships. The old system, for all its faults, maintained a direct link—a visibility of power and its impact—that is often missing today.

The Industrial Revolution dramatically reconfigured these dynamics, ushering in a new elite: the capitalist and industrialist class. Bolstered by philosophies championing individual enterprise, this new power often viewed traditional ethical constraints as fetters on progress. As technological change, like agricultural mechanization, uprooted rural populations, these displaced masses became the workforce for factories. Here, many new leaders operated with a primary allegiance to profit, often detached from the holistic well-being of their workers. The well-documented exploitation of the early industrial era, detailed in reports like the Sadler Report (1832), highlights a pivotal shift towards a more impersonal form of power, where economic forces could seem autonomous and the direct, observable burdens of the old aristocracy were replaced by a more diffuse, less personally felt responsibility. This historical uncoupling of power from immediate, visible consequence serves as a sobering precedent, not only for understanding certain tendencies in modern political and economic leadership but also as we contemplate the profound detachment inherent in AI decision-making.

In celebrating the undeniable progress since those times, we must also critically assess what may have been eroded: a degree of direct personal accountability in leadership and the tangible, if hierarchical, bonds of visible, interdependent communities. As we navigate the complexities of the 21st century—from global political instability to the ethical frontiers of AI—these historical reflections become ever more pertinent. If leaders of the past were, by the very structure of their society, more directly vested in the outcomes of their choices, this starkly contrasts with modern scenarios where political accountability can be obscured by layers of bureaucracy, partisan divides, or the sheer scale of globalized systems. AI threatens to add another layer of obfuscation, potentially amplifying this existing challenge of diluted responsibility.

The challenge, therefore, is multi-faceted. It involves not only designing AI governance with a profound awareness of these historical lessons but also fostering a renewed culture of tangible accountability among our human leaders. How do we ensure that those in positions of political power today feel the weight of their decisions with a clarity that mirrors, in modern terms, the directness experienced by past leaders? How do we prevent both complex human systems and the "black box" of AI from becoming shields for the evasion of responsibility?
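One way to make "clear lines of responsibility" concrete in software is an append-only decision log in which no AI-mediated decision can be recorded without a named human owner. The sketch below is purely illustrative — the class names, fields, and rules are assumptions for the sake of the example, not a reference to any real system or standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class DecisionRecord:
    """One immutable entry in an append-only log of AI-mediated decisions."""
    decision: str            # what the system decided
    rationale: str           # human-readable explanation of why
    model_version: str       # which model produced the decision
    accountable_owner: str   # the named human who answers for it
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


class DecisionLog:
    """Append-only log: every entry must name an accountable human."""

    def __init__(self) -> None:
        self._entries: list[DecisionRecord] = []

    def record(self, decision: str, rationale: str,
               model_version: str, accountable_owner: str) -> DecisionRecord:
        if not accountable_owner.strip():
            # Refuse "ownerless" decisions: no skin in the game, no entry.
            raise ValueError("every decision needs a named accountable owner")
        entry = DecisionRecord(decision, rationale,
                               model_version, accountable_owner)
        self._entries.append(entry)
        return entry

    def entries(self) -> list[DecisionRecord]:
        # Auditors get a read-only copy; the log itself is never rewritten.
        return list(self._entries)
```

The design choice mirrors the essay's argument: the system makes it structurally impossible to take an action that nobody personally answers for.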

The lords and ladies of bygone eras, for all their manifest flaws, understood leadership as carrying an intrinsic, personal weight. As we confront a future where power can be wielded through increasingly abstract and technologically mediated means, we must urgently rediscover and reinscribe this fundamental principle of vested interest. This means demanding transparency, clear lines of responsibility, and a felt sense of duty from our political leaders, even as we strive to build AI systems that are aligned with human values and subject to meaningful human oversight. Ultimately, the quest for accountable power—whether in human hands, political structures, or emerging AI—is one we must collectively champion.

Paul F. Accornero

Paul F. Accornero is a C-suite leader, global strategist, and the author of the forthcoming book, The Algorithmic Shopper. He currently serves as the Global Chief Commercial Officer for one of the world's market-leading consumer goods companies, where he is a key architect of its global commercial strategy. In this role, he directs a multi-billion-euro business with a P&L spanning over 120 countries and is responsible for the performance of thousands of employees worldwide.

Paul stands at the intersection of classic brand building and the next frontier of commerce. His career has been defined by leading profound organizational and digital transformations for some of the world's most iconic consumer brands. For over a decade at the L'Oréal Group, he was instrumental in shaping commercial policy and strategy across the Asia Pacific region, including serving as Chief Commercial Officer for the Consumer Products Division in P.R. China. Since 2008, he has been a driving force behind the globalization of his current company, spearheading the omnichannel strategies that have successfully navigated the disruption of the digital age. His leadership has a proven track record of delivering exceptional results.

His unique perspective is not merely academic; it has been forged through decades of hands-on operational experience and senior leadership roles on multiple continents. He has served as CEO, President, or Managing Director for major subsidiaries in the USA, Japan, and Singapore, giving him an unparalleled, ground-level view of the global commercial landscape he deconstructs in his work.

A rigorous strategic framework complements this extensive real-world experience. A graduate of the University of Queensland, Paul completed his postgraduate business studies at Harvard Business School, where he studied disruptive strategy under the world’s foremost thought leaders, including the late Clayton Christensen. This blend of C-suite practice and elite academic insight makes him uniquely positioned to write the definitive playbook for the age of AI-driven commerce.

As an active and respected industry leader, Paul is a Fellow of both the Institute of Directors (FIoD) and the Chartered Institute of Marketing (FCIM) in the UK. He is also a Liveryman of the World Traders Livery Company and a Freeman of the City of London, affiliations that connect him to a deep network of influential business leaders.

The Algorithmic Shopper is more than a book; it is the culmination of a career spent leading on the front lines of commercial evolution.

https://theaipraxis.ai