Op-ed by Gohar Sargsyan, Tata Consultancy Services, Europe, and Ernesto Damiani, Università degli Studi di Milano, Italy. This is an abridged version of the full paper, which can be found here.
The sustainability regulatory landscape is incredibly complex, characterised by overlapping requirements, regulatory gaps, and the resulting proliferation of new rules. This complexity stems not only from the difficulty of understanding and interpreting these regulations but also from the challenge of effectively integrating them into the day-to-day business operations of organisations.
Large companies, in particular, are swamped by the sheer volume of regulations they must comply with. These include the Corporate Sustainability Reporting Directive (CSRD), the EU Taxonomy Regulation, the Corporate Sustainability Due Diligence Directive (incorporating the Sustainability Omnibus Package introduced in February 2025), and the EU Green Bond Standard.
The EU has also introduced several measures to promote gender equality, aligning with broader sustainability and social responsibility goals, such as the Gender Equality Strategy 2020-2025, the Women on Boards Directive, and the Equal Pay Directive.
In addition to these public regulations imposed by governments, companies have their own policies and strategies that provide frameworks on ethics, privacy, safety, security, diversity and inclusion, gender equality, and more. In short, public regulations, company policies, and internal strategies must all be considered in corporate decision-making.
The decision-making process
Companies driven by profit naturally seek cost-effective solutions when faced with new regulations, which often require investments of time and money. AI tools, capable of rapidly processing regulations and optimising for cost reduction and profit maximisation, become attractive options.
However, the complexity of sustainability and gender equality regulations necessitates multifaceted decision-making involving various organisational levels, from middle management (responsible for implementation) to senior leadership and the board, as well as departments like Compliance, Legal, Human Resources, Sustainability, and Internal Audit.
While organisational strategies and policies may evolve, decision-making processes often remain unchanged.
Critically, because middle management both directly implements policies and regulations and may focus on cost and profit, delegating AI implementation to them creates a conflict of interest. They might prioritise cost reduction and profit maximisation over regulatory directives from entities like the European Union and the United Nations, or over strategies and policies from their board of directors and CEO.
This is especially problematic if the AI systems lack social cost and ethical considerations, potentially leading to decisions that clash with regulatory requirements and the organisation’s stated values. The European Union AI Act is a significant step towards establishing a legal framework for ethical and trustworthy AI [1].
How Generative AI can implement the regulations
The transformative potential of Generative Artificial Intelligence (GAI) based on Large Language Models (LLMs) is undeniable and poised to reshape business, government, and society.
Recent work [2] highlights how generative AI, built on LLMs, is poised to revolutionise business and society by enabling innovative strategies and operations. A recent TCS study on reimagining business with GAI revealed that 57% of CEOs and top decision-makers around the world are optimistic about GAI’s potential impact on their business, and 72% of the companies surveyed plan to rework their operations to adopt GAI [3].
According to Gartner, as reported by The Economic Times, by 2026, 20% of organisations will use GAI to reduce middle-management roles by 50% [4].
Many organisations are turning to GAI for both implementation and decision-making in these areas. First, we need to consider how public regulations and company policies are affected when processed by GAI. When GAI modifies content, the result is not simply a routine substitution; a new element is inserted into the model, effectively transforming the original document into a new one.
This transformation, mediated by GAI, involves the ‘transformer’ – the GAI component responsible for the modification – and it is this AI-powered transformation that drives any change to the object’s state.
To use a metaphor, AI takes an initial ‘love letter’ (the original policy or regulation) and rewrites it into a new ‘love letter’ (the modified version) designed for maximum impact. If the AI’s transformation doesn’t achieve the desired effect, it is deemed unsuccessful. From a regulatory perspective, even if guidelines are applied to the initial ‘love letter’, the transformed version is fundamentally different because the AI transformer has mediated it.
Crucially, this transformation often operates on publicly available information. A key question emerges: how is the public space, the shared information environment (the ‘commons’), impacted by the widespread use of transformer models? There is little doubt that public data significantly shapes generative AI models’ optimisation and ethical outcomes, underscoring the importance of high-quality data over sheer volume.
Public data curation will be crucial to transformer models’ performance and ethical behaviour in generative AI [5]. This warrants further investigation in a separate study.
With the increasing complexity of ESG, sustainability, and gender equality regulations, how do companies effectively implement policies and regulations to achieve these objectives?
A key concern arises when AI heavily influences corporate decision-making, as AI models are often optimised for cost efficiency. While cost optimisation is a valid input for a transformer model, the model may be unaware of regulatory obligations. Companies might claim their decisions align with strategy and policy documents, yet a paradox emerges: if the practical implementation is driven solely by profit maximisation and cost minimisation, neglecting social costs, the decisions actually taken may conflict with those stated commitments.
In other words, it is ‘now or never’ to inject ethical AI into the optimisation model rather than relying on post-hoc evaluations.
How do we inject ethics into AI-based models?
Injecting and integrating ethical considerations into all types of AI optimisation models, especially in LLM-based GAI, can be achieved through various approaches.
One possible approach involves translating regulations into a digital format and incorporating them into LLM model tuning alongside relevant examples. This approach has been attempted with mixed results, presenting several challenges.
For instance, experiments with public benefits policies in the U.S. have shown that while LLMs can support Rules as Code pipelines, they still require detailed prompts and extensive human oversight to ensure accuracy and equity [6].
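To make this concrete, here is a minimal sketch (ours, not taken from the cited study) of how a Rules as Code pipeline might pair a machine-readable encoding of a clause with a detailed LLM prompt; the clause, identifiers, and threshold check are purely illustrative assumptions.

```python
# Illustrative "Rules as Code" sketch: one regulatory clause encoded both as
# structured data (for deterministic checks) and as a detailed LLM prompt,
# with ambiguity explicitly routed to a human reviewer.
from dataclasses import dataclass

@dataclass
class EncodedRule:
    rule_id: str           # hypothetical identifier, not an official scheme
    source_text: str       # verbatim clause from the regulation
    machine_check: str     # deterministic proxy condition, where one exists
    needs_human_review: bool

rule = EncodedRule(
    rule_id="EX-2006-54-EC-equal-pay",                   # illustrative only
    source_text="Equal pay for equal work or work of equal value.",
    machine_check="abs(pay_gap_pct) <= threshold_pct",   # proxy, not the law itself
    needs_human_review=True,                             # legal text needs interpretation
)

def build_llm_prompt(rule: EncodedRule, case_description: str) -> str:
    """Detailed prompt, reflecting the finding that LLM support for Rules as
    Code pipelines still needs explicit instructions and human oversight."""
    return (
        f"Regulation clause: {rule.source_text}\n"
        f"Case: {case_description}\n"
        "Explain step by step whether the case complies, cite the clause, "
        "and flag any ambiguity for a human reviewer instead of guessing."
    )

print(build_llm_prompt(rule, "Two engineers in equal roles with a 12% pay gap."))
```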
Legal texts are inherently complex and open to interpretation, making it difficult to translate them into a precise, machine-readable format. The Gender Equality Directive (2006/54/EC), which mandates equal treatment of men and women in employment and occupation, contains numerous clauses that require contextual understanding and interpretation, which can be challenging to encode digitally. Furthermore, different jurisdictions have varying standards and formats for regulations.
The experience of using curation tools like IBM’s Alignment Studio [7] in tuning LLMs with business conduct guidelines has shown the limitations of directly using the regulations’ text in LLM model tuning. Metadata may eventually provide an alternative source for LLM tuning; the development of regulatory machine-readable metadata formats like X2RL [8] promises to enhance regulatory documents with annotations that are accessible and understandable to LLMs.
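As a hedged illustration of the metadata idea, a regulatory clause might carry machine-readable annotations along the following lines; the field names below are our own invention for illustration and do not reproduce the actual X2RL specification.

```python
# Hypothetical machine-readable annotations for one regulatory clause.
# Field names are our own illustration, NOT the X2RL schema.
annotation = {
    "document": "Directive 2006/54/EC",
    "clause": "Equal treatment in employment and occupation",
    "obligation_type": "equal_treatment",   # normalised obligation category
    "applies_to": ["employers"],
    "protected_attribute": "gender",
    "jurisdiction": "EU",
    "plain_language_summary": (
        "Employers must ensure equal treatment of men and women "
        "in matters of employment and occupation."
    ),
}

# A tuning pipeline could pair each clause's plain-language summary with its
# structured annotations, giving the LLM context beyond the raw legal prose.
tuning_record = {
    "input": annotation["plain_language_summary"],
    "metadata": {k: v for k, v in annotation.items() if k != "plain_language_summary"},
}
print(tuning_record["metadata"]["obligation_type"])
```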
Existing pilots like Denmark’s initiative for ‘digital-ready legislation’ have highlighted the problem of ensuring consistency across different authorities when integrating new laws with GAI models [9].
Our proposal
Our proposal takes an entirely different approach. Rather than translating regulations into a machine-readable representation, it leverages them to create legends, i.e., positive/ideal ‘champion’ profiles (exemplars) of individuals whose behaviour embodies regulatory compliance. The management literature has discussed using legends as organisations’ testimonials [10].
Our legends or champion profiles can be defined as idealised representations of individuals who perfectly embody the traits, behaviours, and characteristics that align with regulatory requirements and organisational values.
These idealised profiles, generated from regulatory requirements, will be used as positive tuning examples. They expose LLM models to synthetic examples of ethical leadership marked by a deep commitment to environmental, social, and governance matters. Our legends tell tales of creating social value, influencing organisational behaviour, promoting employee engagement, and achieving ecological sustainability.
We tune the LLM models to recognise and prioritise these positive examples, for instance in hiring scenarios. These synthetic champion examples are associated with a premium (e.g., a higher hypothetical wage) compared to real-world examples, regardless of whether these specific individuals are needed or retained.
Our legends do not necessarily reflect actual employees. Instead, they serve as benchmarks or gold standards for the model to emulate [11]. By being fed legends derived from regulatory requirements, the models are trained to produce outputs that are accurate, legally sound, and ethically sound.
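A minimal sketch of how such legend records might be assembled as positively weighted tuning examples follows; the profile fields, the premium factor, and the record format are illustrative assumptions rather than a prescribed implementation.

```python
# Assembling synthetic "legend" records as positively weighted tuning examples.
# Fields, weights, and texts are illustrative assumptions.
import json

PREMIUM = 1.25  # hypothetical premium over real-world examples (e.g. a higher weight/wage)

def make_legend(requirement: str, ideal_behaviour: str, base_weight: float = 1.0) -> dict:
    """Turn a regulatory requirement into an idealised exemplar record.
    The premium tells the tuner to prioritise legends over ordinary data."""
    return {
        "profile": {
            "derived_from": requirement,
            "behaviour": ideal_behaviour,   # what the champion does; not a real person
        },
        "label": "compliant_exemplar",
        "weight": base_weight * PREMIUM,    # prioritised during loss weighting
    }

legend = make_legend(
    requirement="Equal pay for work of equal value (Directive 2006/54/EC)",
    ideal_behaviour=(
        "As a hiring manager, sets salary bands by role and skills only, "
        "audits every offer for gender pay gaps, and documents deviations."
    ),
)
print(json.dumps(legend, indent=2))
```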
Using ideal profiles also helps maintain consistency across an LLM model’s decisions and predictions, ensuring that the model treats all individuals fairly, based on the established standards.
We argue that training models with legends reinforces the desired outcomes and behaviours, making the model more reliable and aligned with the intended goals. Research has shown that using confident examples or ideal profiles can improve the robustness and accuracy of models [12].
This way, LLM models will be tuned to recognise acceptable deviations from standard contracts. They could even flag proposals that exceed those acceptable boundaries, even if humans ultimately choose to proceed with the agreement. The model acts as a judge, providing an assessment, even though human decisions can always override its judgment.
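The judge-and-override pattern could look like the sketch below, where the scoring function is a stand-in for a legend-tuned model and the acceptable boundary is an assumed policy parameter.

```python
# Sketch of the "model as judge" pattern: the tuned model assesses a proposal
# against acceptable deviation boundaries; humans can always override.
from typing import NamedTuple

class Assessment(NamedTuple):
    deviation_score: float  # 0 = matches the standard contract, 1 = far outside
    flagged: bool
    rationale: str

ACCEPTABLE_DEVIATION = 0.3  # hypothetical boundary set by policy

def tuned_model_score(text: str) -> float:
    """Placeholder: a real system would query the legend-tuned LLM here."""
    return 0.42 if "unlimited liability waiver" in text.lower() else 0.1

def judge_proposal(proposal_text: str) -> Assessment:
    score = tuned_model_score(proposal_text)
    exceeded = score > ACCEPTABLE_DEVIATION
    return Assessment(
        deviation_score=score,
        flagged=exceeded,
        rationale="Deviation exceeds acceptable boundary" if exceeded
                  else "Within acceptable boundaries",
    )

assessment = judge_proposal("Contract with unlimited liability waiver clause.")
if assessment.flagged:
    print("Flagged for review:", assessment.rationale)
    # A human decision-maker may still choose to proceed; the model only advises.
```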
Cybersecurity and organisational resilience in ethical AI adoption
As AI becomes central to how organisations implement sustainability and gender-related regulations, cybersecurity and resilience emerge as foundational pillars for trustworthy and ethical decision-making [13].
AI models, particularly those driven by generative architectures, are only as reliable as the integrity of the data, infrastructure, and systems supporting them.
The proactive approach advocated in this paper, embedding ethics through regulation-derived ‘champion’ profiles, requires a secure and resilient infrastructure to function effectively. These synthetic legends must be protected from tampering, and the models tuned with them must operate in environments that ensure transparency, traceability, and auditability.
Without robust cybersecurity, even ethically tuned models risk being undermined by systemic vulnerabilities, rendering ethical intentions ineffective.
Resilience also includes the ability to adapt and recover. As we propose maintaining multiple model versions for rapid compliance with evolving regulations, this flexibility must be safeguarded against rollback attacks or unauthorized model switching. This calls for secure version control, access management, and continuous monitoring – key elements of AI system resilience.
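As a simple illustration of that safeguard (our sketch, not any specific product), a model registry can pin each approved version to a content hash and refuse silent rollbacks or unapproved switches:

```python
# Minimal sketch of a tamper-evident model registry guarding against
# rollback attacks and unauthorised model switching. Illustrative only.
import hashlib

class ModelRegistry:
    def __init__(self):
        self._approved = {}   # version -> sha256 digest of the model artefact
        self._active = None

    def approve(self, version: str, artefact: bytes) -> None:
        """Record the hash of a compliance-approved model version."""
        self._approved[version] = hashlib.sha256(artefact).hexdigest()

    def activate(self, version: str, artefact: bytes) -> None:
        """Switch versions only if the artefact matches its approved hash."""
        digest = hashlib.sha256(artefact).hexdigest()
        if self._approved.get(version) != digest:
            raise PermissionError(f"Rejected: {version} is unapproved or tampered with")
        self._active = version   # continuous audit logging would go here in practice

registry = ModelRegistry()
registry.approve("v2-csrd-2025", b"model-weights-bytes")
registry.activate("v2-csrd-2025", b"model-weights-bytes")  # ok
# registry.activate("v2-csrd-2025", b"altered-weights")    # would raise
```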
In short, ethical AI is inseparable from secure and resilient AI. Embedding ethical values into AI decision-making requires not only regulatory integration and value alignment but also a strong cybersecurity posture and operational resilience. These safeguards are critical not just for compliance, but for protecting the very integrity of ethical AI systems in high-stakes domains like employment and finance.
Conclusions
Sustainability regulations, covering everything from green finance to human rights, create a significant challenge for big companies. The conflict between profit and social costs requires a new approach to compliance.
While Generative AI can help, we must ensure it does not sacrifice social values for efficiency. Using examples like sustainable finance and gender equality, we presented a new way to embed regulations by creating legends, i.e., champion synthetic examples that guide the models’ decisions in production.
Proactively feeding AI with legends can help to build ethics into decision-making models, avoiding costly fixes later. However, for these ethically tuned models to function reliably in practice, they must operate within secure and resilient digital environments that protect against manipulation, bias, and system failure.
Our proactive method is much more efficient than constantly checking models after use. We also suggest using multiple model versions for quick responses to changing rules, enabling faster, real-time compliance.
These innovations help companies consider sustainability regulations while adopting GAI models for decision-making, providing a chance to be both ethical and profitable. This approach highlights a practical way to embed ethical AI into company decision-making.
By addressing these challenges and leveraging innovative solutions, regulations can be effectively translated into digital formats and incorporated into LLM model tuning, ensuring compliance and ethical behavior.
The authors would like to thank Ernst Ulrich von Weizsäcker and Neupane Bhanu for their early review and discussion of the ideas put forward in this paper. Read the full paper here.
About the authors

Dr Gohar Sargsyan is Head of Sustainability Business at TCS Europe and drives the company’s sustainable business growth. She has over 25 years of experience in business and IT, including in leadership roles, and has a proven track record of implementing complex multidisciplinary initiatives and solutions for different industries. She focuses on emerging topics such as Green IT, deep tech, cybersecurity, 5/6G, ethical AI and how they can contribute to a better world. Dr Sargsyan is the recipient of the 2024 IEEE TCHS Outstanding Leadership Award.

Prof Dr Ernesto Damiani is a Full Professor at the University of Milan, Italy, where he leads the SESAR Lab, and President of the Consorzio Interuniversitario Nazionale per l’Informatica (CINI). He also serves as Acting Dean of Computing at Khalifa University, UAE. His research focuses on secure SOA, certifiably robust AI and data analytics models, and cyber-physical systems. He holds a doctorate honoris causa from INSA Lyon. In 2022, Ernesto was awarded the rank of Officer of the Order of the Star of Italy for contributions to international scientific AI collaboration.
References
[1] European Parliament, “The EU AI Act”, https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence, June 8, 2023 (last visited 16/02/2025).
[2] Storey, V.C., Yue, W.T., Zhao, J.L. et al. Generative Artificial Intelligence: Evolving Technology, Growing Societal Impact, and Opportunities for Information Systems Research. Inf Syst Front (2025). https://doi.org/10.1007/s10796-025-10581-7
[3] TCS AI Study, “From potential to performance by design: Reimagining business with AI”, https://www.tcs.com/insights/global-studies/ai-business-study-key-findings-report, Jan 2025 (last visited 30/03/2025).
[4] “By 2026, 20% of organisations will use AI to reduce 50% of middle management roles: Gartner”, The Economic Times, Oct 24, 2024 (last visited 16/02/2025).
[5] Guo, Xu, and Yiqiang Chen. “Generative AI for synthetic data generation: Methods, challenges and the future.” arXiv preprint arXiv:2403.04190 (2024).
[6] Naumova, Elena N. “Who is responsible for AI-generated public health policies?” Journal of Public Health Policy 44.4 (2023): 517-522.
[7] Achintalwar, Swapnaja, et al. “Alignment Studio: Aligning large language models to particular contextual regulations.” IEEE Internet Computing (2024).
[8] McLaughlin, Patrick A., and Walter Stover. “Drafting X2RL: A Semantic Regulatory Machine-Readable Format.” MIT Computational Law Report (2021).
[9] Motzfeldt, Hanne Marie, and Ayo Næsborg-Andersen. “Developing administrative law into handling the challenges of digital government in Denmark.” Electronic Journal of e-Government 16.2 (2018): 136-146.
[10] Kanyamukenge, Christian, and Dorothy Muthoka-Kagwaini. “Role of Ethical Leadership in Promoting Sustainable Business Practices.” International Journal of Business and Management 19.6: 209. doi:10.5539/ijbm.v19n6p209.
[11] Mauri, Lara, and Ernesto Damiani. “Estimating Degradation of Machine Learning Data Assets.” J. Data and Information Quality 14.2, Article 9 (June 2022), 15 pages. https://doi.org/10.1145/3446331
[12] Northcutt, Curtis G., Tailin Wu, and Isaac L. Chuang. “Learning with confident examples: Rank pruning for robust classification with noisy labels.” arXiv preprint arXiv:1705.01936 (2017).
[13] G. Sargsyan, “Cybersecurity as a Backbone for Sustainability”, 2024 IEEE International Conference on Cyber Security and Resilience (CSR), IEEE Xplore, doi:10.1109/CSR61664.2024.10679426.

