The paper examines the organizational and managerial implications of artificial intelligence for the reintegration of people with disabilities into the workplace, with a legal focus on civil liability for damage caused to third parties by the assistive tool. It also examines the crossing of boundaries between man and machine, exploring the benefits and risks of AI-based technologies.
1. Introduction: Posthuman, Artificial Intelligence, and Organizational Studies
The integration of AI-based technologies in work contexts is a crucial issue in the debate on organizational studies and in the attention paid to the specificities of the worker. In particular, these developments highlight how the possible adoption of AI to support people with disabilities should not be understood solely as a technological and/or legal issue, but also as an organizational challenge to overcome barriers to participation. In this regard, it is crucial to consider the organizational ecosystem in which new technologies operate and to understand the role of businesses, institutions, and communities in facilitating the accessibility and integration of AI-based solutions.
In recent years, the increasing adoption of AI-based technologies and the proliferation of digital ecosystems have profoundly changed the world of work. These developments are part of a broader technological revolution in which innovation has become the core of global economic dynamics, triggering significant changes in sectors such as finance, healthcare, and education (Berman, Cano-Kollmann, & Mudambi, 2022). Artificial intelligence technologies have shown tremendous potential to optimize human capabilities, enabling people with disabilities to actively participate in the workforce through the use of advanced assistive devices such as brain-computer interfaces (Metzger et al., 2023). These innovations are part of an innovation ecosystem that includes not only business and government, but also academia and civil society, according to the “quadruple helix” model (Carayannis & Campbell, 2009).
However, the integration of these technologies is not without its challenges. As Stahl and Wright (2018) point out, the implementation of artificial intelligence and big data raises important questions about compliance with existing liability laws for harm caused by “AI-based” assistive tools. In particular, the use of AI to enhance the communicative and cognitive abilities of persons with disabilities could risk fostering new forms of inequality if it is not ensured that such technologies are accessible to all and developed with respect for human dignity. Other criticisms relate to the criteria of “performance” and productivity that these technologies can promote, with the risk of applying ableist standards that ignore slowness and reflexivity as intrinsic communicative values.
Ultimately, the digital innovation ecosystem, which includes models such as the “triple helix” (Leydesdorff & Etzkowitz, 1998) and the “quintuple helix” (Carayannis, Barth, & Campbell, 2012), represents not only an extraordinary opportunity to improve the employment inclusion of people with disabilities, but also an area of critical debate regarding equity, safety, and technological accessibility. The “triple helix” model emphasizes collaboration between academia, industry, and government to foster innovation and technological development. However, the integration of AI systems in the workplace requires a further level of interaction with civil society and the natural environment, leading to the configuration of the “quintuple helix”.
A central issue in today’s debate is the redefinition of subjectivity and work identity through sociomateriality. Gherardi (2024) points out how in today’s work practices, the collaboration between human and non-human, as also described by Orlikowski (2007), continuously redefines the subjectivity of workers in increasingly post-human organizational environments. In these contexts, technologies become essential elements in the performance of work activities, fusing human and artificial capabilities into a single operational entity. This fusion process leads to the creation of new ways of understanding the human body and work identity, which are no longer seen as fixed or determined, but as fluid and open to continuous modification and adaptation. In fact, sociomateriality reflects the need to consider the worker as a dynamic hybrid of social, material, and technological factors, a concept that is particularly relevant when exploring AI-based assistive technologies.
Within this hybridized subjectivity, the integration of AI technologies into cognitive and organizational processes has recently become central. With reference to work practices reshaped by the posthuman, Gherardi (2024) again argues that AI-based assistive technologies can enable workers with disabilities to participate fully in working life, redefining not only work practices but also the criteria for inclusion. It is no longer a matter of compensating for a lack, but of enhancing existing capabilities. The AI-assisted worker thus becomes an example of post-human subjectivity, in which distinctions between human and artificial capabilities become increasingly irrelevant.
In the context of organizational studies, the integration of AI and new technologies redefines the concept of worker and efficiency. Traditional models of performance evaluation based solely on biological and cognitive abilities must be revised to account for the interactions that characterize the new work environment. Again, Gherardi (2024) examines how these dynamics are reshaping the role of the worker in relation to the technologies with which he or she interacts, emphasizing that the introduction of improved technologies must not lead to new forms of inequality.
This last theme, the redefinition of the concepts of worker and efficiency in relation to AI-based assistive technologies, is essential for understanding how the organizational changes envisaged need to be framed not only in a socio-material perspective but also within an appropriate legal framework. This raises crucial questions about whether such transformations should take place within the confines of pre-existing legislation or whether specific regulation is needed to protect the rights and needs of people with disabilities using these technologies and to ensure inclusion, safety, and justice. The legal section focuses exclusively on the analysis of the more controversial Italian legal landscape, since, as will be discussed below, an adequate proposal for a supranational directive on non-contractual civil liability for damage caused by AI already exists.
2. Case Study: BCIs and the Employment Integration of People with Disabilities
The evolution of brain-computer interfaces (BCIs) is radically transforming the way people with severe motor or sensory disabilities can interact with the world. One of the most promising developments in this field is the use of neuroprostheses to decode brain signals and allow those who have lost the ability to speak or express themselves in traditional ways to regain fluid communication. Consider a recent empirical study in which, thanks to a BCI, the participant is able in real time to control and animate a virtual avatar, which simulates facial expressions and speaks (Metzger et al., 2023).
2.1. Participant profile
The participant is a forty-seven-year-old woman who, as a result of a stroke that affected her brainstem, is tetraparetic, lacking the ability to move her upper limbs and vocalize sounds. Before the implantation of the neuroprosthesis, her only means of communication was a technological head-tracking device that allowed her to type and form sentences at approximately fourteen words per minute.
With the use of the neuroprosthesis, the participant was able to increase her rate of communication to seventy-eight words per minute, giving her greater fluency and immediacy. This increase in speed not only improved the quality of daily social interactions but could also have a significant impact on the workplace in the near future. In fact, the ability to communicate more fluently allows her to potentially be more involved, overcoming one of the main barriers that people with severe disabilities face in an organizational context: the slowness and difficulty of transferring information.
2.2. Avatar customisation
One of the most innovative aspects of the study concerns the use of a virtual avatar, which is “controlled” by brain signals decoded by the neuroprosthesis. The participant was able to choose the appearance of her avatar from several predefined options. This avatar does not merely represent the participant, but reproduces facial expressions in real time, which are then synchronized with the words.
2.3. Critique of the Ableist Concept of Communicative Performance
A relevant point of criticism, which emerged in the Nature study, is the emphasis on speed as a parameter of success for the neuroprosthesis. Raising the participant’s communicative capacity from fourteen to seventy-eight words per minute may seem like significant progress, but this emphasis on speed reflects an ableist assumption about communicative “performance”. The push to “normalise” disabled communication, bringing it closer to neurotypical standards, raises ethical questions about how technology defines what is considered “desirable” or “acceptable” in communication. The attempt to conform the participant to a fast and fluid communication model may ignore the intrinsic value of slower communication, which might include deeper reflection or greater attention to detail.
2.4. The disabled person as a post-human subject?
In light of these considerations, can we perhaps define the person with disabilities who uses neuroprosthetics as a posthuman subject? According to Braidotti (2018), the posthuman subject is a relational entity that transcends traditional mind-body and nature-culture dichotomies and integrates technology as an essential part of one’s existence. In the case of the participant, the neuroprosthesis is not simply an external device, but becomes an integral part of her body and identity.
The choice of avatar and the use of the neuroprosthesis allow the participant to expand her communicative capacities, transforming her into a subject that exists on the border between biology and technology.
Technology, according to Pianezzi, is understood as the second nature of the human being, “[…] divorced from the ontological monism that requires the egalitarian view of being” (Pianezzi, 2022, p. 165). In questioning the morality of certain organizational actions (in this example, the adoption of AI-based technology to support the communication of the severely disabled participant), it is necessary to ask whether such actions comply with an inescapable moral principle, namely the denial of a fixed hierarchy. In fact, observing the phenomenon through the lens of post-human ethics places radical equality at the center, and “[…] is not based on the extension of human rights to non-humans, but on the denial of a moral and ontological hierarchy between species, and between species and nature, understood as animal life (zoe) and earth (geo)” (Pianezzi, 2022, p. 168).
This empirical case shows how such technology can be a double-edged sword: on the one hand, it enables new forms of expression and autonomy in relational and professional domains; on the other hand, it risks perpetuating ableist logics and exposing users with disabilities to real gaps in protection, given the legal framework currently in place for cases of harm caused to third parties by a malfunctioning assistive tool.
2.5. Reasonable Accommodation in Organizational Ecosystems: A New Norm to Respond to Emerging Subjectivities?
Attention to the specificities of the disabled worker, as in the present case, should not be understood only as a technological issue, but also as an organizational challenge aimed at overcoming obstacles to participation. The adoption of reasonable accommodations in organizational ecosystems is therefore essential to overcome these obstacles, as, moreover, the most recent regulatory guidelines suggest.
In this sense, the potential use of “AI-based” technologies in the work of people with disabilities raises important legal issues, particularly with regard to civil liability in the event of harm. The integration of devices such as neuroprostheses into work activities, while an important technological advance, highlights legal gaps that need to be filled to ensure effective protection of rights. Existing legislation, in particular Articles 2043 and 2050 of the Civil Code, may provide a basis for addressing the risks associated with the use of AI, but may prove insufficient in the face of complex scenarios. In other words, the increasing decision-making autonomy of AI may require a rethinking of the legal framework so that the specificities of AI technologies can be considered in the context of disabilities, ensuring inclusion, safety and justice for the workers involved.
3. Who is civilly liable in case of damage caused by “AI-based” technology applied to support people with disabilities? Ad hoc legislation to protect newly employable work units?
Authoritative doctrine, precisely with regard to torts related to the use of AI, has warned against the tendency to automatically introduce a new norm for each new tortious phenomenon: the jurist cannot limit himself to applying the norm to the exact case outlined by it, but must instead use interpretation as the main tool at his disposal (Finocchiaro, 2019). Therefore, only as an extrema ratio could the construction of ad hoc legislation be envisaged.
In the rarer hypothesis in which the provider of the “AI-based” service and the user with a disability are bound by a contractual relationship, the attribution of civil liability for possible malfunctioning of the system appears to be unproblematic: it tends to be the provider who is liable for non-performance or inexact performance of contractual obligations. However, it cannot be ruled out that contractual liability may exist jointly with non-contractual liability (so-called concurrent liability) (Ferrari & Lusardi, 2019).
As for the latter, an initial theoretical orientation that envisaged the possibility of resorting to Article 2049 of the Civil Code on the employer’s liability for the tort of “servants and clerks” is to be discarded today. In fact, in such a case the employer is liable for the act of a person who is abstractly liable, and proof of this is precisely the fact that the employee is liable for the damage caused jointly and severally with the employer. Moreover, the most authoritative Italian doctrine has long since moved beyond the thesis of even partial subjectivity (Teubner, 2019) of the AI system: “[…] it seems very doubtful that the search for a solution must pass through the recognition of a subjectivity of the software agent, albeit partial. Indeed, if the recognition of a full legal subjectivity appears […] a ‘forcing’ that does not take into account the current reality, the same can be said of partial subjectivity […]” (Perlingieri, 1991, pp. 325-326).
As for indirect liability, qualified by the prevailing doctrine as strict liability or aggravated liability depending on the classification adopted, some authors had also envisaged a reference to Article 2048 of the Civil Code, containing provisions on the liability of parents, guardians, and tutors, who are liable for the act caused by the minor, the person under guardianship, the pupil, or the apprentice. In this sense, it is pointed out that the AI system reacts solely on the basis of the education given to it. The thesis, however suggestive, is bound to break down against some limitations: AI, unlike the “child”, is not characterized by any capacity for self-determination; the most recent innovations in machine learning, deep learning, and generative AI adopt an unsupervised approach, and thus reduce the “training” phase to a minimal level of initial programming (Ferrari & Lusardi, 2019). A similar argument could be made in relation to Article 2047 of the Civil Code.
Likewise, the reference to Article 2052 of the Civil Code on liability for damage caused by animals should be rejected, since, through domestication, the owner is called upon to carry out a control over the animal’s ability to react that is lacking in the case of AI. In fact, the owner, custodian, or user of the “AI-based” device has a minimal possibility of affecting the conduct of the system and, as a rule, does not even adequately know the operating mechanisms.
The use of AI could also give rise to a dangerous activity under Article 2050 of the Civil Code. The most perceptive doctrine, however, now argues that this characterization is improper, because AI is not inherently dangerous and should reasonably be regarded as a technique more reliable than humans, a means of correcting human inaccuracies (D’Alfonso, 2022). Conversely, it cannot be ruled out that the provision is applicable when AI is used for the performance of an activity that is dangerous in itself.
So, for non-hazardous activities, Article 2051 of the Civil Code on things in custody, which excludes liability only for fortuitous events, would apply, while the less stringent Article 2050 of the Civil Code, which admits the exculpatory proof of having done everything possible to avoid the damage, would apply only to activities that are per se hazardous. Consistently, it has been said that Art. 2050 c.c. would apply when the damage is caused by the thing subjected to the direction, albeit inadequate, of a person, while Art. 2051 c.c. would find application in cases where the thing was not directly operated by the operator (Serrao, 2021).
Other interpreters have asserted that Article 2051 of the Civil Code places the focus exclusively on the so-called inanimate res, that is, on an element that appears far removed from the AI instrument, which, by its very nature, is capable of producing behavior and decisions. Moreover, the very notion of “custodian” would be unsuitable when referring to an individual who, in fact, is not always concretely able to control the device in the full sense. Nevertheless, it is thought that Article 2051 of the Civil Code, in its meaning of strict liability, still remains applicable when the device is the cause of the injury as a direct source of the damage (and not as a means of the autonomous conduct of the owner/user/custodian) (D’Alfonso, 2022).
In conclusion, many of the rules mentioned above are applied by analogy, and thus the eventual entry into force of a European regulation on the subject will likely confine their application to residual cases. Finally, it should be noted that among the proposals for regulations that the European Parliament approved on October 20, 2020, one in particular (A9-178/2020) envisaged compulsory insurance for operators to cover civil liability, adjusted to the amounts and the size of the compensation referred to in Articles 5 and 6 of the proposal, except where the activity carried out is already subject to a compulsory insurance scheme under other EU or national law or to voluntary company insurance funds (Serrao, 2021).
For all the reasons stated so far, we believe there is a long-standing regulatory gap in relation to the present case. To address it, it is worth insisting on the need for the legislature to construct ad hoc legislation to protect those who use (or, as in the case of the person with disabilities who “regains” his or her working autonomy, are often forced to use) “AI-based” technology, precisely in anticipation of its possible malfunction. It will be up to the Italian legislature to structure the new legislative text, and to do so in the wake of the not insignificant organizational-relational dynamics outlined above, as well as, on the subject of non-contractual liability, of the Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (COM/2022/496 final).
In this framework, the extension of the organizational model of the “triple helix” to the “quintuple helix” implies the need to regulate, from a legal point of view, relationships that no longer refer only to companies, academic institutions and public administrations, but also to those that involve the direct participation of society in decision-making and the environment. In other words, civil liability for damages caused by “AI-based” assistive technologies designed to support people with disabilities should be considered not only in a logic of state regulation and innovation, but also in light of the ethical and social demands raised by civil society and the environment in which it operates.
4. Concluding remarks
“AI-based” technologies represent a great opportunity for the employment reintegration of people with disabilities, offering new tools for reducing the production gap, for efficient and productive reintegration into the workforce, and for the formulation of genuinely “new” ways of working.
However, alongside these potential benefits, it is essential to critically consider the possible legal implications, as well as the risk of perpetuating ableist logics that exalt speed and performance while penalizing aspects such as reflection and slowness, which can be intrinsic values of communication. Resolving these critical issues requires a balanced managerial approach that not only promotes work efficiency, but also the quality of interactions and respect for human dignity.
Precisely on the subject of respect for human dignity, and thus also protection of the rights of people with disabilities as set forth in the 2006 UN Convention, it is crucial to address the issue of civil liability in the event of damage caused to third parties by assistive technologies. Current regulations may be inadequate to handle complex scenarios related to the use of AI, making the introduction of ad hoc regulation urgent. Only by ensuring a sound and inclusive legal framework will it be possible to fully exploit the benefits of AI technologies without compromising the rights of people with disabilities and, consequently, the safety of third parties.
References
Berman, A., Cano-Kollmann, M., & Mudambi, R. (2022). Innovation and entrepreneurial ecosystems: Fintech in the financial services industry. Review of Managerial Science, 16(1), 45-64.
Braidotti, R. (2018). A theoretical framework for the critical posthumanities. Theory, Culture & Society, 36(6), 31-61.
Carayannis, E.G., & Campbell, D.F.J. (2009). “Mode 3” and “Quadruple Helix”: Toward a 21st century fractal innovation ecosystem. International Journal of Technology Management, 46(3/4), 201-234.
Carayannis, E.G., Barth, T.D., & Campbell, D.F. (2012). The Quintuple Helix innovation model: Global warming as a challenge and driver for innovation. Journal of Innovation and Entrepreneurship, 1(1), 2.
D’Acunto, L., Orlando, L. (2024). Disabilità e tecnologie assistive “AI based”: questioni di interesse civilistico. In Dell’Aversana, F., Mollo, A., Napolitano, D., Sicca, L.M. (Eds.), Su la disabilità, Naples, Editoriale Scientifica, 303-319.
D’Alfonso, G. (2022). Intelligenza artificiale e responsabilità civile. Prospettive europee. Revista de Estudios Jurídicos y Criminológicos, 6, 163-195.
Ferrari, A., Lusardi, G. (2019). Che responsabilità possono derivare dal malfunzionamento dei sistemi di AI? In AA.VV., Come preparare la propria azienda alla digital revolution. Opportunità, obblighi e rischi dell’intelligenza artificiale, Milan, Wolters Kluwer Italia, 73-90.
Finocchiaro, G. (2019). Il quadro di insieme sul Regolamento europeo sulla protezione dei dati personali. In Finocchiaro, G. (Ed.), Il nuovo regolamento europeo sulla privacy e sulla protezione dei dati personali, Bologna, Zanichelli.
Gherardi, S. (2024). Sociomaterialità e il lavoratore postumano. In de Vaujany, F.-X., Gherardi, S., & Silva, P. (Eds.), Organization Studies and Posthumanism: Towards a More-than-Human World. Routledge.
Leydesdorff, L., & Etzkowitz, H. (1998). The Triple Helix as a model for innovation studies. Science and Public Policy.
Metzger, S. L., Littlejohn, K. T., Silva, A. B., Moses, D. A., Seaton, M. P., Wang, R., … & Chang, E. F. (2023). A high-performance neuroprosthesis for speech decoding and avatar control. Nature, 620(7976), 1037-1046.
Orlikowski, W.J. (2007). Sociomaterial Practices: Exploring Technology at Work. Organization Studies, 28(9), 1435-1448.
Perlingieri, P. (1991). Il diritto civile nella legalità costituzionale, Naples, ESI.
Pianezzi, D. (2022). Corpi (dis)organizzati: etica, lavoro e organizzare femminista. Editoriale Scientifica (pp. 165-168).
Serrao, P. (2021). La responsabilità civile per l’uso di sistemi di intelligenza artificiale nella Risoluzione del Parlamento europeo 20 ottobre 2020: “Raccomandazioni alla Commissione sul regime di responsabilità civile e intelligenza artificiale”. Giustizia Insieme.
Stahl, B.C., & Wright, D. (2018). Ethics and Privacy in AI and Big Data: Implementing Responsible Research and Innovation. IEEE Security & Privacy, 16(3), 26-33.
Teubner, G. (2019). Soggetti giuridici digitali? Sullo status privatistico degli agenti software autonomi (Femia, P., Ed.), Naples, ESI.

