Essay on Ethical Concerns in AI for Social Work Research and Practice
Artificial intelligence (AI) is becoming an increasingly influential tool in social work research and practice, offering opportunities to address pressing social challenges. However, its use comes with ethical and environmental implications that demand careful consideration. Integrating AI responsibly into social work requires prioritizing equity, sustainability, and transparency to ensure these technologies serve the communities they are intended to benefit. A critical component of this ethical approach is acknowledging and addressing skepticism. Concerns about AI’s potential to cause harm or exacerbate inequities are valid. In this post, I will discuss several significant concerns about adopting AI in social work practice and research, along with potential future directions.
Environmental Impacts
One of the most pressing criticisms of AI is its environmental footprint. Large cloud-hosted models require enormous computational power, leading to significant energy consumption and environmental impacts that fall unequally across regions (See this article). In contrast, smaller, locally deployed models consume fewer resources while still performing tasks effectively. These models offer a practical alternative for researchers and practitioners concerned about sustainability. For example, using smaller models reduces the carbon footprint of AI applications while maintaining the capacity to analyze data, generate insights, and assist with decision-making. This approach reflects a commitment to balancing technological advancement with environmental stewardship.
Small language models (SLMs) that can be installed on personal computers or even smartphones have shown remarkable potential over the past year (See this article by Microsoft). By leveraging these locally deployed models, users can mitigate privacy concerns and prevent issues related to data leakage. SLMs are continually advancing, and their utility is expected to grow as they are optimized with domain-specific training data for various fields of research. This tailored approach not only enhances their effectiveness but also aligns with sustainability goals by minimizing the energy requirements of AI operations.
Privacy and Confidentiality
Data privacy is central to ethical AI use in social work, where sensitive information is often involved. The National Association of Social Workers (NASW) technology and social work practice standards provide directives regarding data security in social work services and practice settings. In a society increasingly shaped by “data capitalism,” where data itself has become a commodity, hospitals often claim they have de-identified patient data while simultaneously monetizing and selling it. Similarly, social media data, once shared, becomes permanently stored in data repositories, erasing the “right to be forgotten.” Conversations with generative AI tools like ChatGPT can further exacerbate privacy risks, as data shared for analysis may be stored in their systems and potentially used for model training, creating serious concerns about data leakage.
Locally deployed small language models (SLMs) offer a way to address these data-sharing challenges, as they keep data entirely on-site and under the user’s control. This removes the need to transmit sensitive data to external parties and supports compliance with confidentiality requirements. This is particularly important in social work research, where trust and the ethical handling of data are foundational principles. Adopting tools that prioritize privacy reinforces these values, ensuring that AI applications do not compromise the integrity of research or practice.
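Even when cloud-based tools cannot be avoided, the principle of keeping raw identifiers on the practitioner’s machine can be applied with simple local safeguards. The sketch below illustrates that idea with a minimal, hypothetical redaction step run before any text leaves the device; the patterns and function names are illustrative assumptions, not a substitute for vetted de-identification tooling or formal review.

```python
import re

# Hypothetical minimal redaction sketch: masks a few common identifier
# patterns (emails, phone numbers, SSN-style numbers) before any text is
# shared with an external AI service. Real de-identification requires
# vetted tools and human review; this only illustrates the principle of
# keeping raw identifiers on the local machine.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Client reachable at jane.doe@example.org or 555-123-4567."
print(redact(note))  # → Client reachable at [EMAIL] or [PHONE].
```

A pattern-based filter like this catches only a narrow set of identifiers; names, addresses, and contextual details require more sophisticated approaches, which is one more reason fully local models remain the stronger privacy option.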
At the same time, broader systemic measures are needed. Social workers and practitioners require education on data sovereignty, ethical data sharing, the difference between local and cloud-based models, and the risks associated with AI solutions. This is particularly urgent as AI technologies, such as automated clinical or case note systems, are increasingly integrated into healthcare settings (See this article). AI literacy and data literacy are no longer optional; they must become core components of social work education. Training programs should prepare social workers to critically evaluate and responsibly use AI tools, ensuring that the adoption of these technologies aligns with ethical principles and safeguards the confidentiality and dignity of those served.
▶️ Please find this article on data colonialism and data sovereignty: https://harvardlawreview.org/blog/2023/06/data-colonialism-and-data-sets/
Impacts on Marginalized Communities
AI’s potential to perpetuate bias is a valid concern, particularly in fields like social work where equity and justice are foundational. Engaging directly with communities, especially marginalized populations, can mitigate these risks. Involving stakeholders in co-designing AI solutions ensures that outcomes align with their lived experiences and values, fostering equitable and just applications.

AI’s role in social work must also address systemic social inequities. Concerns about AI as a “job taker” or as a tool that disempowers communities are valid but not inevitable. When applied responsibly, AI can democratize access to tools and skills that were once limited to experts. Training non-technical individuals to use AI tools has opened pathways for professional growth and empowered individuals to create meaningful value in their communities. This approach repositions AI from being a disruptor to becoming an enabler of equity and opportunity.
Historically, the introduction of any groundbreaking technology has been met with societal resistance and confusion. The internet faced similar skepticism upon its emergence, yet numerous studies have demonstrated its role in expanding equity, such as improving access to information, facilitating knowledge dissemination, and creating economic opportunities. The internet has enabled barrier-free communication, allowing people to connect across distances, share knowledge, and access resources. Immigrants, for example, have used the internet to obtain critical information for resettlement and integration. Today, discussions around AI similarly center on whether it will become a “great equalizer” or exacerbate existing inequalities. As AI rapidly permeates various sectors, the social work profession has a critical role in ensuring that this technology accelerates equity rather than deepening disparities. Social workers must actively advocate for AI applications that serve and empower underserved populations, including low-income individuals, racial and ethnic minorities, women, older adults, and children.
Moreover, AI must be developed and deployed by these communities and for their benefit. Social workers should monitor AI’s development and implementation to ensure that it aligns with the principles of justice, fairness, and inclusivity. By leveraging AI to amplify the voices of marginalized groups and address their unique challenges, social work can guide this technology toward creating a more equitable society. Meaningful integration of AI into social work requires direct engagement with communities. Co-designing AI solutions with the input of those who will use or be affected by them ensures that these tools are responsive to real-world needs and challenges. This collaborative approach not only promotes trust but also enhances the impact of AI by grounding it in community-driven priorities.