Finance Experts Panel | Jiang Xiaojuan Discusses AI for Good: What Is Good, How to Do Good, Who Will Do Good
Written by | Professor Jiang Xiaojuan of the University of the Chinese Academy of Social Sciences, former Deputy Secretary-General of the State Council
Introduction: Recently, the 2026 China Digital Economy Development and Governance Academic Annual Conference (Chongqing) was held at Southwest University of Political Science and Law. Professor Jiang Xiaojuan of the University of the Chinese Academy of Social Sciences, former Deputy Secretary-General of the State Council, delivered a keynote speech titled “AI for Good: What Is Good, How to Do Good, Who Will Do Good.”
Jiang Xiaojuan argues that while there have been many discussions of “AI for Good,” with a high level of consensus at the level of ideas, there is insufficient discussion of how these “goods” are to be realized and who will implement them. This question needs to be examined within the knowledge systems of the social sciences.
What Is Good: A Social Science Perspective
Discussions of AI for Good have been under way for a long time, and agreement at the level of ideas runs high. From UNESCO’s 2016 Preliminary Draft Report of COMEST on Robotics Ethics to the 2025 Paris AI Action Summit, the principles of AI governance have commanded broad consensus.
Concepts such as safety, transparency, non-discrimination, explainability, traceability, fairness and justice, inclusiveness and openness, respect for privacy, shared benefits, human-centeredness, and human control have been discussed repeatedly. What is missing is sufficient discussion of how to realize these “goods” and who will carry them out. At present such discussions are conducted mainly by the enterprises and technical communities concerned, within the framework of “alignment,” and the answers they produce are partial, shifting, and lacking in general stability.
I believe this issue should be analyzed within the knowledge systems of the social sciences. The broad concept of “good” is precisely the theme and purpose of much social science research. Whether a technology is truly good depends fundamentally on whether it promotes economic development, social progress, and people’s happiness; that is, whether it enhances human well-being.
The social sciences can not only propose ideas for doing good but also, drawing on their accumulated scholarship and theoretical capacity, supply evaluation standards, implementation paths, and responsible actors within a general knowledge system.
1. Reasonableness Is Good: Efficient Resource Allocation, Increased Social Welfare, and Fair Distribution
“Reasonableness” is a core concept in economics. Economics defines “reasonableness” as improving resource allocation efficiency, increasing social welfare, and achieving relatively fair distribution.
Toward this goal, economics offers clear evaluation standards and indicators. Improvements in total factor productivity, higher input-output ratios, income growth, and greater innovation and investment all measure resource allocation efficiency; better education and healthcare and a sounder social security system are indicators of increased social welfare. By these measures, AI has already contributed significantly to total factor productivity and social welfare, and the goodness of the technology is evident.
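To make the first of these indicators concrete, here is the standard growth-accounting formulation of total factor productivity (the Solow residual); this is a textbook illustration added here, not part of the speech:

```latex
% Under a Cobb-Douglas production function Y = A K^{\alpha} L^{1-\alpha},
% where A is total factor productivity (TFP), K is capital, and L is labor,
% TFP growth is the part of output growth left unexplained by input growth:
\[
\frac{\Delta A}{A} = \frac{\Delta Y}{Y}
  - \alpha \frac{\Delta K}{K}
  - (1-\alpha) \frac{\Delta L}{L}
\]
% A persistently positive residual after AI adoption is one sign that the
% technology is raising resource allocation efficiency.
```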
To realize “reasonableness,” economics also identifies implementation paths and responsible actors. For example, letting the market play the decisive role in allocating the resources involved in AI development is a key path, and this necessarily makes enterprises the principal actors. Of course, the market involves not only enterprises but also a sound “market environment” of fair competition and equal access, which in turn requires well-developed market regulation.
Measured against fair distribution, however, AI cannot yet be called “good.” Economics uses the Gini coefficient, income gaps, and similar indicators to assess whether the fruits of development are distributed relatively fairly, and by these measures AI’s impact so far is mainly negative. On one hand, wealth is increasingly concentrated among a very small number of successful innovators; on the other, AI’s substitution effects fall mainly on low- and middle-income groups, and there is as yet no evidence that continued AI development will improve or reverse this situation.
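As a concrete illustration of the fairness indicator mentioned above, here is a minimal sketch of the Gini coefficient; the code and the income figures are hypothetical additions, not data from the speech:

```python
# Gini coefficient via the mean-absolute-difference formula:
#   G = sum_i sum_j |x_i - x_j| / (2 * n^2 * mean)
def gini(incomes: list[float]) -> float:
    n = len(incomes)
    mean = sum(incomes) / n
    abs_diffs = sum(abs(xi - xj) for xi in incomes for xj in incomes)
    return abs_diffs / (2 * n * n * mean)

# Stylized incomes before and after a technology shock whose gains
# accrue mostly to one successful innovator (hypothetical numbers).
before = [30, 40, 50, 60, 70]
after = [28, 38, 48, 58, 120]
print(f"Gini before: {gini(before):.3f}")  # lower = more equal
print(f"Gini after:  {gini(after):.3f}")   # higher = less equal
```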
Judging from past episodes of technological progress, solving this problem requires both effort from AI companies themselves and a better-performing government. Governments need to balance labor-replacing AI technologies against the new employment opportunities AI creates, and fulfill their long-term responsibility for improving social security systems.
2. Usefulness Is Good: Benefits Beyond GDP for Consumers
The benefits of some technological advances cannot be fully captured by standard GDP growth, yet they generate substantial consumer surplus, or utility; put simply, they bring convenience, happiness, and a sense of well-being to ordinary people. AI’s impact here is especially prominent.
AI brings the good of convenience. The benefits are significant, but many are not reflected in GDP. For example, consumers’ self-service use of online platforms, AI models, and intelligent agents brings great convenience, yet generates no economic activity counted in GDP; indeed, it often replaces services that would have been counted, such as self-service ticketing replacing traditional booking agents, free online information replacing newspaper subscriptions, and email replacing postal mail.
The cultural industry is a prime example. Platforms and generative models enable everyone to enjoy more music, books, videos, and richer cultural products, greatly increasing cultural consumption. Meanwhile, the market size of cultural products measured by GDP has not grown proportionally.
For instance, data from the Recording Industry Association of America show that U.S. music industry revenue declined from $14.6 billion in 1999 to $7.5 billion in 2016. The benefits digital music brings to consumers cannot be measured by GDP. Platforms offering free services do generate GDP through advertising, but studies show that the GDP these services produce is far smaller than the benefits they deliver and the measured activity they displace. Clearly, AI brings benefits in the form of utility.
AI also promotes equality. It brings vast numbers of ordinary consumers into consumption and creative fields previously accessible mainly to high-income and highly educated groups. In cultural consumption, for example, consumers with limited literacy can use AI to generate images and videos, enriching their cultural experience, while low-income consumers can use free platform services to enjoy cultural products and services that are expensive offline (such as performances in high-end theaters).
Similarly, in cultural and creative fields, ordinary people with limited professional skills can turn their creative inspirations into cultural products and share them with others. Influencers on social networks not only sell products and services but also interact with fans, sharing lifestyles, emotions, fashion, and dreams and meeting consumers’ spiritual and psychological needs.
Benefits that take the form of free services, entertainment, and mutual aid cannot be measured by GDP growth or rising income, but they can be assessed with methods such as contingent valuation or willingness-to-pay surveys: asking consumers how much they would pay to obtain these benefits, or how much compensation they would require to give up certain free services, such as an app like “Xiaohongshu” or a free large model, and then aggregating to estimate the overall benefit to society.
Research shows that the ratio of benefits received to monetary income is significantly higher for low-income groups than for high-income groups, indicating that AI does promote equality and enhance the welfare of low-income people.
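A minimal sketch of how such a survey might be aggregated, assuming hypothetical respondents and stated willingness-to-pay (WTP) figures; nothing here comes from the research the speech cites:

```python
# Each record: (income group, monthly income, stated WTP for free services).
# All numbers are illustrative.
responses = [
    ("low", 3000, 150), ("low", 3500, 180), ("low", 2800, 160),
    ("high", 30000, 400), ("high", 45000, 500), ("high", 38000, 450),
]

groups: dict[str, list[tuple[float, float]]] = {}
for group, income, wtp in responses:
    groups.setdefault(group, []).append((income, wtp))

for group, rows in groups.items():
    mean_income = sum(i for i, _ in rows) / len(rows)
    mean_wtp = sum(w for _, w in rows) / len(rows)
    # The benefit-to-income ratio is the quantity the speech describes
    # as higher for low-income groups.
    print(f"{group}: mean WTP = {mean_wtp:.0f}, "
          f"benefit/income = {mean_wtp / mean_income:.1%}")
```

With these illustrative numbers the low-income group’s benefit-to-income ratio (about 5%) far exceeds the high-income group’s (about 1%), matching the pattern the research describes.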
Benefits can also be “not good.” Some consumer behaviors that bring momentary psychological pleasure can cause deep, long-term physical and mental harm: addiction to online games, for example, or entrapment in information cocoons that narrow one’s understanding. Society broadly agrees that these problems are harmful, and those affected suffer yet find it hard to break free.
Technology providers and users alike must exercise restraint and self-discipline. Where no effective safeguards exist, such harmful applications should not be pursued; where adverse consequences have already occurred, technological means should be used to restrict and contain them. Just as product manufacturers are responsible for product safety, harmful products that threaten health and life must not be sold.
In addition, responding to and managing these problems requires cooperation between government and society. For “evil” behaviors that society broadly condemns, such as challenging human values, invading privacy, or promoting terrorism, public authorities must take strong action.
3. Consensus Is Good: Societal Agreement on Long-term Technological Consequences
Many social science disciplines study “consensus”; in sociology, for example, social consensus denotes a high level of societal agreement. This article defines “consensus” as “the social agreement and social cohesion determined by the greatest common denominator” and discusses the ethical issues of technology in the AI era from this perspective.
Ethical issues in science are long-standing, but in the AI era they are especially prominent, and their nature has fundamentally changed. Historically, science was about “discovering natural laws”: endogenous regularities of the natural order, patterns formed by the interplay and evolution of forces over billions of years. Now AI endeavors to create states that do not exist in natural evolution and to establish new orders; many explorations seek to alter humanity’s natural condition or social state.
For example, in the life sciences, where “AI for science” applications are most intensive, many studies attempt to change human physiology, reproduction, and cognition, and even to intervene in the formation of consciousness, shifting human subjectivity and control over the behaviors involved. Some aim to create new life forms whose long-term consequences are unknown even to the scientists who invent them. Consider how different this is from past scientific discovery.
In such scenarios, whether society agrees with a given scientific direction becomes critically important; that is the “consensus” discussed here. I once told a scientist I greatly respect that I was curious about, and looked forward to, a particular research project he was undertaking. As an economist I could not judge it immediately, but as a human being I would say the project is entirely without consensus.
When scientists attempt to change human traits and natural laws shaped over millions of years, it becomes a major issue that concerns everyone. The public must be informed and involved, and able to express whether they agree. Such discussions, heavy with technical content, are difficult to advance through willingness-to-pay assessments alone; they require transparent, collective deliberation.
Scientists have a responsibility to explain to the public all possible consequences, not only the benefits, and to let society take part in thorough discussion so as to form the greatest common denominator of social consensus. Only through full expression and continued debate can we approach “consensus” and find a workable position. We must prevent technology-driven “innovations” from being rushed forward by a few irresponsible or short-sighted experts and causing irreversible harm. Ultimately, the demand for consensus must be present in questions like AI for Good.
Mechanism Exploration: Multi-party Cooperation to Promote AI for Good
Let us examine the mechanisms for achieving “for good.” Apart from the “utility benefits” that flow naturally from the technology itself, neither “reasonableness” nor, especially, “consensus” comes about automatically. So where do the incentives to do good come from, and how can the corresponding mechanisms be designed?
Practical experience shows that at multiple levels there are forces supporting “for good” and forces producing “not good” outcomes, and in the AI era both differ from those of earlier times. Achieving “for good” requires both self-restraint and social constraint.
First, AI innovators and producers have a significant and effective motivation to do good. A key reason is that AI depends on large-scale application: if its “goodness” does not command social consensus, it cannot be applied widely and sustainably. Society’s intense concern for AI safety and ethics exerts pervasive, strong, and continuous pressure, and value guidance, on enterprises and entrepreneurs.
Maintaining reputation requires producers to “do good,” and when they are perceived as “not good” they must respond and adjust quickly. In 2023, OpenAI faced widespread criticism for training on sensitive user data and promptly promised not to do so again. Several leading domestic AI companies have responded well in similar cases. From this perspective, the incentive mechanism for “doing good” is becoming more pervasive and powerful in this era.
Second, distributed governance is a distinctive feature of AI for Good governance. Unlike traditional industries, AI and data applications are highly scenario-based. Market resource allocation used to be one-to-one; in the AI era it is cluster-based and scenario-specific.
Digital government, smart cities, intelligent transportation, healthcare, and low-altitude industries all require multiple actors to allocate resources together, what we call distributed resource allocation. In this model, stakeholders form communities around specific scenarios and make autonomous choices about transactions and cooperation. Each scenario has its own rules; platforms, for instance, have transaction rules, return policies, and penalties for violations, which define what is “good” and “not good” in that context. Because participants follow these rules, such communities acquire governance functions, which can be called distributed governance.
Third, governance by public authority is indispensable. Some serious “not good” outcomes cannot be left to market and social bargaining but require explicit prohibition, such as unauthorized invasion of privacy, dissemination of false information, terrorism, and hate speech.
Moreover, for market and social governance to be effective, the government’s most important role is mandating transparency. Companies must enable consumers to understand their user agreements quickly, and the transparency of agreement terms is crucial. For innovations that bear on humanity and society, providers should state clearly what they are doing and what the potential consequences are.
Finally, government signaling is especially important. Laws need to be relatively stable, and before a situation settles, rapid legislative updating is difficult and unnecessary. But governments can do a great deal by issuing guidelines, showcasing best practices, criticizing improper conduct, and engaging with the companies concerned, all of which strongly guide AI for Good.
Returning to the main point: the social sciences must play a vital role in promoting AI for Good. With their deep disciplinary foundations, the social sciences sharpen our ability to judge good and ill in AI. They have contributed notably on resource allocation efficiency, social welfare, distributional fairness, the assessment of public perceptions, and the maintenance of social harmony. In the AI era, we must work harder, shoulder our responsibility, and stand at the center and forefront of the discussion, practice, and theoretical development of AI for Good.
Source | Compiled from Professor Jiang Xiaojuan’s remarks at the event
Editor | Lan Yinfan
Reviewer | Qin Ting