What role(s) for Public Institutions in Emerging AI Ecosystems? Co-Designers, Coordinators and Promoters of Technology Transfer

This paper proposes a new framework for the governance of emerging AI ecosystems, reimagining public institutions as active co-designers, coordinators, and promoters of technology transfer rather than mere regulators and supervisors. By providing this ecosystemic perspective, this work supports a cohesive and responsible path for AI development and diffusion.

1. Introduction: What Role(s) for Public Institutions in Governing Emerging AI Ecosystems?

The rapid evolution of Generative Artificial Intelligence (AI) has triggered significant organizational, legal, and ethical concerns (Anderljung et al., 2023; Dwivedi et al., 2021; Robles & Mallinson, 2025; Wirtz et al., 2020). Such challenges underscore crucial questions about the role of public institutions (like governmental bodies) in shaping the governance of this fast-changing ecosystem (De Almeida et al., 2021).

Following Jacobides et al. (2021), we refer to AI ecosystems as complex, evolving systems of organizations, institutions, resources, and socio-technical arrangements that enable, produce, and apply artificial intelligence solutions. We refer to AI governance as the set of regulatory, policy, institutional, and design mechanisms—formal and informal—used to guide, constrain, or facilitate the development, deployment, and societal impacts of AI technologies, in order to maximize benefits and minimize risks across diverse domains (Taeihagh, 2021).

In the context of AI ecosystem governance, public institutions’ role has primarily been confined to a dichotomy: rule-setting – illustrated by the proposed EU AI Act (Lanamà et al., 2024) – and supervision, as seen in the Italian temporary suspension of user-data processing via ChatGPT (Bolici et al., 2024b). While these measures demonstrate a commitment to overseeing AI development and diffusion, they tend to be reactive rather than proactive, and thus lag behind fast-evolving AI capabilities (Robles & Mallinson, 2025). Accordingly, current approaches to AI ecosystem governance may undermine long-lasting policy effectiveness (Bolici et al., 2024b, 2024a; Gualdi & Cordella, 2024), calling for the design of more proactive, systemic, and integrated strategies.

To bridge this gap, we propose a network-based framework that encourages public institutions to move beyond supervising and regulating, becoming co-designers, coordinators, and promoters of technology transfer within AI ecosystems. Drawing from the Triple Helix model (Leydesdorff & Etzkowitz, 1998) and New Public Governance (Osborne, 2010), this framework promotes collaboration among public institutions, industry, and academia to support commercialization, foundational research, and regulatory legitimacy, thus driving responsible innovation (Hong et al., 2019; Wirtz et al., 2020). This framework was developed within the context of the Rome Technopole Project. Aligned with its goals, the framework supports responsible AI development and fair access to technological progress. Our dual approach to technology transfer reflects the Rome Technopole’s mission to: i) advance Technology Readiness Levels (TRLs), helping move academic research into real-world industrial use; and ii) involve a wide range of stakeholders and build strong communities around key areas of innovation.

By positioning public institutions as co-designers, coordinators, and technology transferors, our framework offers a practical model for implementing the Technopole’s vision of sustainable innovation and industrial revitalization in the domain of AI systems.

2. Background: The Evolving Role of Public Institutions

The role of public institutions has progressively aligned with contemporary models of public administration, which emphasize greater interaction and cooperation with third-party stakeholders (Bryson et al., 2014). This evolution has traversed three distinct paradigms: Public Administration (PA), New Public Management (NPM), and New Public Governance (NPG) (Osborne, 2010).

Public Administration, grounded in political science, emphasized centralized decision-making with a top-down approach. In this model, policy creation and implementation were primarily the responsibilities of public officials within a unified state structure. However, criticisms of PA’s rigid structure led to the emergence of New Public Management. NPM applied principles of neo-classical economics, favoring competition and efficiency through market mechanisms and contractual relationships. This approach treated public services as products managed within an open system, emphasizing organizational performance. As the complexity of public service needs grew, New Public Governance emerged, reflecting a pluralistic approach where multiple actors, including networks of organizations, collaboratively shape service delivery. NPG focuses on inter-organizational relationships and the negotiation of diverse values, addressing the limitations of PA and NPM in a more fragmented, interconnected governance landscape (Osborne, 2010). This understanding is driving a transformation in public sector reform, grounded in fundamentally different theoretical and epistemological foundations than those of traditional public administration or NPM. Central to this shift are deliberative, inclusive decision-making processes, collaboration, and co-production as key mechanisms for generating public value (Stoker, 2006).

The NPG paradigm inherently resonates with two central network-based concepts. The first is the Triple Helix model, which elucidates the pivotal role of the interplay between diverse actors – such as academics, industrial players, and governmental institutions – in leveraging innovation and development (Leydesdorff & Etzkowitz, 1998). The second is the concept of ecosystems, representing dynamic and interconnected networks of diverse actors that coexist and interact within a shared environment (Carayannis & Campbell, 2009; Haythornthwaite, 1996).

In this work, we focus on AI ecosystems: complex, evolving systems of organizations, institutions, resources, and socio-technical arrangements that enable, produce, and apply artificial intelligence solutions (Jacobides et al., 2021). These actors – ranging from hardware providers and platform developers to end users, government, and research institutions – are interlinked by shared data flows, R&D collaborations, and market interactions. Public institutions can significantly impact AI ecosystem governance – the set of formal and informal mechanisms defined in the introduction (Taeihagh, 2021) – by orchestrating the interactions among the many actors involved. By integrating these theoretical perspectives, this work revises the role of public institutions within emerging AI ecosystems, qualifying them as co-designers, coordinators, and promoters of technology transfer.

3. Framework and implications: Rethinking Public Institutions’ Role(s) in AI Ecosystems

Recognizing the pivotal role that public institutions play in guiding the development and diffusion of responsible AI systems, this work reexamines their place within emerging AI ecosystems. In this section, we first analyze the current role of public institutions (section 3.1), with a focus on their regulatory and supervisory responsibilities, and then propose a new framework that expands their roles (section 3.2).

3.1 Current Role and Associated Limitations

Currently, AI ecosystems predominantly place public institutions in two reactive roles: as regulators drafting rules (e.g., the proposed EU AI Act) and as supervisors enforcing compliance (Bolici et al., 2024b; Lanamà et al., 2024). While this structure ensures some legal oversight, scholars note several key drawbacks. First, the pace of Generative AI innovation outstrips that of traditional legislative processes, resulting in “governance lag” where nascent technologies remain unsupervised (Birkstedt et al., 2023; Wirtz et al., 2020). Relatedly, many interventions occur only after harms have materialized, reflecting a broader pattern of reactive governance (Wang & Siau, 2018; Wirtz et al., 2022). Such an approach can perpetuate risks, as problems like algorithmic bias or data misuse may escalate before institutions can respond (Robles & Mallinson, 2025). Second, by confining themselves to rule-setting and enforcement, public authorities often miss opportunities to shape AI solutions during early development – where integrating ethical standards and public-interest objectives is more effective (Anderljung et al., 2023). This oversight may leave non-commercial interests, including those championed by academia, underrepresented (Hong et al., 2019). Consequently, existing governance structures risk failing to capture the full collaborative potential among public agencies, private developers, and research institutions, undermining the proactive, integrated AI governance many scholars advocate (Dwivedi et al., 2021).

To overcome the limitations of current approaches, we propose a more proactive and integrated role for public institutions in the AI ecosystem – one that facilitates collaboration among stakeholders, embeds ethical considerations from the earliest stages of development, and creates governance mechanisms that can evolve as rapidly as the technology itself.

3.2 The Role of Public Institutions Rethought

A robust, multi-actor collaboration – supported by proactive public institutions – is indispensable for adaptive, responsible Generative AI governance. Our framework merges NPG and Triple Helix principles to foster co-creation in Generative AI ecosystems, with AI developers, academia, and public institutions collaborating on design, testing, and responsible AI deployment (Birkstedt et al., 2023; Dwivedi et al., 2021).

We propose three key roles for public institutions – co-designer, coordinator, and technology transferor – that together ensure public values remain central (Robles & Mallinson, 2025; Wirtz et al., 2020). As co-designers, governments can shape policies and standards, bridging the “principles-to-practices” gap through transparent, inclusive processes (Hadfield & Clark, 2023). As coordinators, they balance efficiency with ethical oversight, orchestrating diverse stakeholders to prevent AI misapplications (Hong et al., 2019). As promoters of technology transfer, they can provide research funding, open data infrastructures, and equitable access, particularly for under-resourced innovators (Dilling et al., 2020; Wirtz et al., 2020).

Figure 1: Current AI Ecosystem (left) vs. Proposed Framework (right)

3.2.1 Public Institutions as Co-Designers

Under NPG, public institutions are not merely rule-setting bodies, but co-designers that shape the strategic direction of AI in partnership with academia and industry. Thus, co-designer roles empower public institutions to ensure that AI models address societal challenges (e.g., equitable healthcare, fair lending) while remaining legally and ethically robust. This “upstream” engagement reflects the success factors found in triple-helix collaborations, where universities, government, and industry align on early-stage research goals (Hong et al., 2019). In practice, this has the following implications for AI systems design and development:

  • Joint Development: Strategic Vision and Agenda-Setting: Public agencies work with AI developers and researchers to define overarching goals (e.g., “responsible by design,” “human-centric AI”). Similar to “society-in-the-loop” approaches (Birkstedt et al., 2023), public institutions embed societal priorities – such as algorithmic fairness, data privacy, or climate goals – into early development phases.
  • Proactive Participation in Research & Development: Public-sector experts or agency representatives can serve on coordination bodies and project steering committees to ensure that ethical and legal considerations are built into new AI models from the outset. For instance, the notion of “collaborative governance” (Wirtz et al., 2020) highlights the importance of involving public institutions in R&D settings, thus mitigating risk ex ante.
  • Democratization and Legitimacy: By participating in design sprints, pilot programs, and prototyping exercises, public institutions help channel diverse stakeholder interests – including underrepresented community perspectives – into AI solutions (Dwivedi et al., 2021). This inclusive approach bolsters public trust (Robles & Mallinson, 2025), bridging the “democratic deficit” often noted when AI systems are solely industry-driven (Hadfield & Clark, 2023).

Within the co-design cycle, public institutions also act as coordinators (see section 3.2.2) and facilitators of In-The-Loop technology transfer (see section 3.2.3).

3.2.2 Public Institutions as Coordinators

Coordination is the management of interdependencies among activities (Malone & Crowston, 1990) and, indirectly, of the relationships among the actors in charge of their implementation. As introduced in section 3.2.1, public institutions can take on a coordinator role to harmonize and streamline multi-stakeholder collaboration. In essence, the coordinator function recognizes that public institutions do more than passively watch AI innovation unfold. They orchestrate, moderate, and incentivize collaborative norms, channeling the effort of numerous and heterogeneous actors toward common public-interest outcomes (Dwivedi et al., 2021). This coordination role involves:

  • Bridging Regulatory and Technical Dialogues: Public agencies occupy a pivotal position between formal regulation (e.g., data-protection or transparency laws) and fast-evolving AI technologies. As coordinators, they translate broad legislative objectives – such as fairness, non-discrimination, or privacy – into practical guidelines for AI developers. Mäntymäki et al. (2022) propose that effective AI governance depends on bridging corporate governance, IT governance, and data governance; public authorities can actively synchronize these domains.
  • Mitigating Fragmentation and Fostering Convergence: Drawing on Hong et al. (2019), which shows that academia–industry collaboration weakens regional innovation divergence, public institutions can replicate this effect at a broader scale by coordinating resources and setting uniform standards. Central agencies, for instance, can diffuse best practices or unify licensing requirements, reducing duplication and fostering nationwide or international convergence in AI capabilities.
  • Promoting Public Trust and Acceptance: As third-party brokers among competing interests, government bodies can set up oversight processes – such as auditing frameworks, risk assessments, or algorithmic impact assessments – that ensure accountability and transparency. When the public sees robust oversight, trust in AI-driven public services tends to increase (Robles & Mallinson, 2025).

3.2.3 Public Institutions as Promoters of Technology Transfer

Technology transfer (TT) is the movement of know-how, technical knowledge, or technology from one organizational setting to another (Bozeman, 2000). Public institutions can facilitate the TT process, promoting the uptake, diffusion and scaling of AI solutions. We distinguish here between In-The-Loop and Out-of-The-Loop TT.

In-The-Loop TT unfolds within the co-design process, where active knowledge exchange happens among researchers, industry practitioners, and public-sector agencies. As new breakthroughs emerge, they are quickly translated into industrially relevant applications. Public institutions play a pivotal role by:

  • Funding and Demonstration: Supporting applied research programs and pilot projects (Hong et al., 2019) that test generative models in controlled but realistic environments.
  • Test Beds and Knowledge Hubs: Offering neutral platforms where cutting-edge solutions are validated under real-world constraints before wider release (Dilling et al., 2020).
  • Matchmaking: Connecting academics, startups, and established firms to ensure knowledge and technologies flow more easily across organizational boundaries.

This In-The-Loop mechanism promotes mutuality among actors within AI ecosystems: researchers gain early insight into practical constraints (e.g., data security, real-time requirements); industry secures privileged access to new academic breakthroughs, thus reducing R&D time and de-risking innovation; and the public sector fosters solutions that align with social values and are validated in real-world contexts.

Out-of-The-Loop TT addresses recipients outside the core co-design cycle – often smaller enterprises, nonprofits, local governments, or community actors less directly engaged in ongoing R&D. As Hong et al. (2019) suggest in the context of China’s regional innovation convergence, well-structured public-led technology transfer can alleviate geographical or sectoral disparities, building a more balanced AI ecosystem. Inspired by collaborative governance models (Wirtz et al., 2020), public institutions can support Out-of-The-Loop TT by:

  • Creating AI Knowledge Hubs: Provide standardized toolkits, training modules, and use-case frameworks to aid organizations without extensive R&D capacity.
  • Subsidizing AI Adoption: Offer grants or public procurement initiatives that encourage use of vetted AI solutions, especially in under-resourced municipalities or sectors.
  • Expanding Data Access: Develop open, community-driven repositories to ensure that smaller actors can train and deploy AI systems responsibly without prohibitive costs.

3.3 Some Possible Implementation Challenges and Mitigation Strategies

While the roles of co-designer, coordinator, and promoter of technology transfer often complement one another in theory, real-world collaboration between public institutions, academia, and industry can be fraught with challenges if not carefully managed. One common issue is incentive misalignment. For instance, private-sector actors may prioritize speed to market, seeking rapid deployment of AI products, while public institutions emphasize ethical safeguards and regulatory compliance. Without structured coordination, this tension can stall progress or lead to suboptimal outcomes. Mechanisms like milestone-based contracts, routine check-ins, and oversight committees can help align expectations by tying funding and approvals to demonstrated adherence to ethical standards (Dwivedi et al., 2021).

Another challenge is lack of trust. When public agencies enforce rules through top-down mandates without explaining the rationale, they risk alienating their partners. This can lead to resistance or disengagement, undermining collaborative governance efforts. Transparency is key: open consultation formats such as stakeholder roundtables or live-streamed policy discussions, coupled with publicly accessible explanations of major decisions, can build legitimacy and foster mutual accountability (Robles & Mallinson, 2025).

Incentive misalignment and trust issues are only two of many possible frictions in the governance of AI ecosystems. Future research is necessary to systematically identify such factors and possible mitigation strategies.

4. Conclusions

The rapid advancement of AI requires a reassessment of public institutions’ role in governing AI ecosystems: current governance approaches struggle to keep up with technological progress, leading to reactive policies that inadequately address new challenges. This paper proposed a conceptual framework for AI ecosystem governance that addresses these shortcomings. We introduced a collaborative AI development process that brings together public institutions, academia, and industry in a co-design cycle, representing a fundamental shift from traditional hierarchical control modes to a more proactive and participatory approach.

While purely conceptual, the framework we propose aligns closely with the objectives of key development programs in Italy, such as Rome Technopole, demonstrating the potential of our proposed governance approach to address real-world challenges in fostering responsible AI development and equitable technological progress. Specifically, our dual approach to technology transfer resonates with the Rome Technopole’s vision of promoting industrial valorization of research results. The In-The-Loop Technology Transfer we propose parallels the Rome Technopole’s aim to mature Technology Readiness Levels (TRL), facilitating the transition from academic research to industrial applications. Similarly, our Out-of-The-Loop Technology Transfer concept aligns with the Rome Technopole’s goal of engaging diverse stakeholders and creating communities around smart specialization areas.

By positioning public institutions as co-designers, coordinators, and technology transferors, our framework offers a practical model for implementing the Rome Technopole’s vision of sustainable innovation and industrial revitalization in the domain of AI systems.

Acknowledgement: This research has been partially funded by the ecosystem project Rome Technopole – CUP B83D21014170006.

References

Anderljung, M., Barnhart, J., Korinek, A., Leung, J., O’Keefe, C., Whittlestone, J., Avin, S., Brundage, M., Bullock, J., Cass-Beggs, D., Chang, B., Collins, T., Fist, T., Hadfield, G., Hayes, A., Ho, L., Hooker, S., Horvitz, E., Kolt, N., … Wolf, K. (2023). Frontier AI Regulation: Managing Emerging Risks to Public Safety (No. arXiv:2307.03718). arXiv. 

Birkstedt, T., Minkkinen, M., Tandon, A., & Mäntymäki, M. (2023). AI governance: Themes, knowledge gaps and future agendas. Internet Research, 33(7), 133–167. 

Bolici, F., Varone, A., & Diana, G. (2024a). To Ban, or Not to Ban, this Is the D(AI)lemma: An Analysis of Ecosystem Landscapes. In A. M. Braccini, F. Ricciardi, & F. Virili (Eds.), Digital (Eco) Systems and Societal Challenges: New Scenarios for Organizing (pp. 335–353). Springer Nature Switzerland. 

Bolici, F., Varone, A., & Diana, G. (2024b). Unpopular Policies, Ineffective Bans: Lessons Learned from ChatGPT Prohibition in Italy. ECIS 2024 Proceedings. 

Bozeman, B. (2000). Technology transfer and public policy: A review of research and theory. Research Policy, 29(4–5), 627–655. 

Bryson, J. M., Crosby, B. C., & Bloomberg, L. (2014). Public Value Governance: Moving Beyond Traditional Public Administration and the New Public Management. Public Administration Review, 74(4), 445–456. 

Carayannis, E. G., & Campbell, D. F. J. (2009). “Mode 3” and “Quadruple Helix”: Toward a 21st century fractal innovation ecosystem. International Journal of Technology Management, 46(3/4), Article 3/4. 

De Almeida, P. G. R., Dos Santos, C. D., & Farias, J. S. (2021). Artificial Intelligence Regulation: A framework for governance. Ethics and Information Technology, 23(3), Article 3. 

Dilling, M. B., DiSante, A. C., Durland, R., Flynn, C. E., Metelitsa, L., & Selvamanickam, V. (2020). Formulating Industry-Academic Collaborations That Work: Best Practices to Ensure a Strong Relationship After the Agreements are Signed. Technology & Innovation, 21(2), 169–177.

Dwivedi, Y. K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., Duan, Y., Dwivedi, R., Edwards, J., Eirug, A., Galanos, V., Ilavarasan, P. V., Janssen, M., Jones, P., Kar, A. K., Kizgin, H., Kronemann, B., Lal, B., Lucini, B., … Williams, M. D. (2021). Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. International Journal of Information Management, 57, 101994.

Gualdi, F., & Cordella, A. (2024). Theorizing the regulation of generative AI: lessons learned from Italy’s ban on ChatGPT. Proceedings of the 57th Hawaii International Conference on System Sciences. HICSS 2024. 

Hadfield, G. K., & Clark, J. (2023). Regulatory Markets: The Future of AI Governance (No. arXiv:2304.04914). arXiv. 

Haythornthwaite, C. (1996). Social network analysis: An approach and technique for the study of information exchange. Library & Information Science Research, 18(4), Article 4. 

Hong, J., Zhu, R., Hou, B., & Wang, H. (2019). Academia-industry collaboration and regional innovation convergence in China. Knowledge Management Research & Practice, 17(4), 396–407.

Jacobides, M. G., Brusoni, S., & Candelon, F. (2021). The Evolutionary Dynamics of the Artificial Intelligence Ecosystem. Strategy Science, 6(4), 412–435. 

Lanamà, A., Väyrynen, K., & Vainionpää, F. (2024). The European Union’s Regulatory Challenge: Conceptualizing Purpose in Artificial Intelligence. ECIS 2024 Proceedings. European Conference on Information Systems.

Leydesdorff, L., & Etzkowitz, H. (1998). The Triple Helix as a model for innovation studies. Science and Public Policy.

Malone, T. W., & Crowston, K. (1990). What is Coordination Theory and How Can It Help Design Cooperative Work Systems. Proceedings of the Conference on Computer Supported Cooperative Work.

Mäntymäki, M., Minkkinen, M., Birkstedt, T., & Viljanen, M. (2022). Defining organizational AI governance. AI and Ethics, 2(4), 603–609. 

Osborne, S. P. (Ed.). (2010). The new public governance? Emerging perspectives on the theory and practice of public governance (First published). Routledge.

Robles, P., & Mallinson, D. J. (2025). Artificial intelligence technology, public trust, and effective governance. Review of Policy Research, 42(1), 11–28. 

Stoker, G. (2006). Public Value Management: A New Narrative for Networked Governance? The American Review of Public Administration, 36(1), 41–57. 

Taeihagh, A. (2021). Governance of artificial intelligence. Policy and Society, 40(2), 137–157. 

Wang, W., & Siau, K. (2018). Artificial intelligence: A study on governance, policies, and regulations. Proceedings of the Thirteenth Midwest Association for Information Systems Conference, Saint Louis, Missouri, 2018 May 17-18, 1–5.

Wirtz, B. W., Weyerer, J. C., & Kehl, I. (2022). Governance of artificial intelligence: A risk and guideline-based integrative framework. Government Information Quarterly, 39(4), 101685. 

Wirtz, B. W., Weyerer, J. C., & Sturm, B. J. (2020). The Dark Sides of Artificial Intelligence: An Integrated AI Governance Framework for Public Administration. International Journal of Public Administration, 43(9), 818–829.

Authors

Università degli Studi di Cassino e del Lazio Meridionale; Syracuse University (iSchool)

Università degli Studi di Cassino e del Lazio Meridionale