I just read about something interesting. Chinese AI models are becoming really strong, but people are worried about them. Why is that?
Ah, that’s a big topic right now! These models are great at tasks like coding and reasoning, but some people, including the CEO of Hugging Face, are concerned about censorship.
Censorship? You mean like blocking things online?
Exactly. For example, if you ask a Chinese AI about the 1989 Tiananmen Square protests, it may refuse or give an evasive answer, because it’s designed to steer around politically sensitive topics.
That’s kind of like a teacher skipping hard questions in class. But why does it matter if the AI is so good at other things?
Good analogy! The worry is that the censorship could spread as other countries adopt these models. It’s like buying a textbook with missing pages: it affects what you can learn.
Oh, so if companies in the U.S. or Europe use these models, the censorship could show up here too?
Exactly. That’s what Clement Delangue, the CEO of Hugging Face, is worried about. He thinks AI should be developed by many countries, not dominated by just one or two. That way, ideas stay balanced.
Makes sense. But why are Chinese models getting so good so quickly?
Chinese companies have embraced open-source AI, meaning they release their models’ weights and code publicly. That lets developers everywhere improve and build on them quickly. But those companies still have to follow government rules, which can mean building censorship into the models themselves.
So, are all Chinese AI models like this?
Not all. For example, Alibaba has a model on Hugging Face called Qwen2.5 that doesn’t seem to censor much. But another one from Alibaba, QwQ-32B, does censor sensitive topics.
It’s like some textbooks being complete and others having missing pages. Why do companies still use them?
Because these models are really powerful and cost-effective. But it’s a trade-off between performance and ethical concerns.
That’s tricky. So, what’s the solution?
Experts like Delangue say we need to make sure AI is developed worldwide, not just in one or two places. That way, no single country’s rules or values dominate.
Got it. AI should be like a class project where everyone contributes, not just one person writing it all. That way, it’s fair.
Exactly! And we also need to keep talking about ethics to make sure AI stays helpful for everyone.