Why India risks falling way behind in the AI race

Residents take part in the Jugalbandi bot field trial in Bengaluru, India, on April 19. Bloomberg

India's tech industry is being less than bold in embracing artificial intelligence. It's hoping to create solutions for corporate clients by building on top of somebody else's investment in foundational technologies, hardly a strategy for pathbreaking success.

ChatGPT's high-voltage debut last year has galvanised China. Baidu Inc's Ernie, which claims to have outperformed Microsoft Corp-backed OpenAI's model on some measures, has pulled Ant Group Co and JD.com into the bot-building race.

Tech czars like Wang Xiaochuan, the founder of the search engine Sogou, have also joined the quest, drawing talent to the industry. On money flow, the US still beats China six to one, but venture deals in the Asian country's AI industry already outnumber those in consumer tech, according to Preqin data.

India's startup landscape, meanwhile, is caught in a time warp, with embarrassed investors marking down their stakes in Byju's, an online education company collapsing under the weight of its own reckless growth. The easy funding from the pandemic era has dried up. As financiers push founders for profitability, they're discovering that in many cases even the revenue is fake.

This was the perfect time for the traditional Indian coding powerhouses -- the likes of Tata Consultancy Services Ltd and its rival Infosys Ltd -- to put their superior financial muscle to use and assert leadership in generative AI. But they have their own governance challenges. TCS is distracted by a bribes-for-jobs scandal in the US that it is desperately trying to downplay. Infosys is busy managing the blowback from its association with an Australian lobbying firm at the centre of a parliamentary inquiry Down Under.

Even without those challenges, the outsourcing specialists aren't exactly in a sweet spot. Demand for their services is weak, particularly because of the turmoil in global banking. Decisions on IT spending have slowed. Keener competition for a smaller pie could mean a fall in order wins and deterioration in pricing, JPMorgan Chase & Co analysts said earlier this month. Meanwhile, the Indian firms' wage bills are bloated, thanks to their hiring spree during the pandemic when clients were scrambling to digitise their operations.

No wonder then that the industry's approach to AI is defensive, geared toward assuring investors that the technology poses little threat to its time-tested model of labour-cost arbitrage. When three lines of C programming replaced 30 lines of assembly language, it didn't lead to mass layoffs but to an explosion in code-writing. Similarly, when outsourcing made enterprise software cheaper, IT budgets didn't deflate. Volumes rose as prices fell. "Why should this time be different?" asks the TCS annual report for 2022-2023.

This is a rather phlegmatic reaction to a revolution whose possibilities are beginning to scare its own creators.

ChatGPT can surely write snippets of code or run a quality check on them, potentially reducing billing hours. But that's hardly the point that needs addressing. Living alongside machines that are smarter than any of us poses troubling prospects for the future of humanity, especially if the algorithms come to be controlled by evil actors.

Even leaving aside those profound concerns about a potentially dystopian future, more prosaic questions also matter to users of enterprise software. Companies from banking to retail and aviation must decide how to engage with so-called large language models. And they can't be sure whether taking something off the shelf is good for data privacy. What exactly are Indian firms doing to grab this opportunity?

Bengaluru-based Infosys has adopted a mix-and-match strategy, so its clients can choose from 150 pre-trained models across more than 10 platforms, and then run them on any cloud or in-house servers. The TCS annual report says that its research in large language models is oriented toward "creating techniques for controlled code generation, question answering, consistent image generation, solving optimization problems and other core AI problems".
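A "mix-and-match" catalogue of the kind described can be pictured as a simple filter over model metadata: the client names a task and a deployment target, and the catalogue returns the pre-trained models that fit. The catalogue entries, field names and function below are invented for illustration -- Infosys's actual tooling is not public in this detail.

```python
# Hypothetical sketch of a mix-and-match model catalogue: clients filter
# pre-trained models by task and by where they are willing to run them
# (a public cloud or their own in-house servers).

CATALOG = [
    {"name": "summariser-a", "task": "summarisation", "deploy": {"aws", "on_prem"}},
    {"name": "coder-b",      "task": "code",          "deploy": {"azure"}},
    {"name": "qa-c",         "task": "qa",            "deploy": {"gcp", "on_prem"}},
]

def pick_models(task: str, target: str) -> list:
    """Return names of catalogue models that match the task and can run
    on the client's chosen target ("on_prem" for in-house servers)."""
    return [m["name"] for m in CATALOG
            if m["task"] == task and target in m["deploy"]]

print(pick_models("qa", "on_prem"))   # ['qa-c']
```

The point of such a layer is choice of venue, not ownership: every model in the catalogue is still someone else's pre-trained artefact.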

However, if Alphabet Inc is cautioning employees about how much information they can share with chatbots, including its own Bard, then how can TCS or Infosys assume that global multinationals will be comfortable pitching their tents on platforms available to just about anyone?

Indian software services firms also ought to be building language models from scratch for themselves and their customers. Yes, it takes computational power and engineering talent to train neural network-based programmes on vast amounts of natural-language inputs. But forgoing that route and merely connecting clients, via application programming interfaces (APIs), to existing products is unnecessarily timid, especially when no serious business might want to rely on a publicly available external foundational model for mission-critical tasks.
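The thinness of that API-wrapper model can be shown in a short sketch. Everything here is hypothetical -- the helper, model name and payload shape are illustrative of a chat-completion-style request, not any firm's actual implementation -- but it makes the privacy concern concrete: the client's proprietary documents travel inside the prompt to a foundation model the vendor does not control.

```python
# Hypothetical sketch of the "API wrapper" approach the column criticises:
# a services firm stuffs client-specific context into a prompt and forwards
# it to an external foundation model.

def build_request(question, client_documents, model="gpt-4"):
    """Assemble a chat-completion-style payload.

    The proprietary client data rides inside the prompt itself, which is
    why data-privacy worries arise: the external provider sees everything
    the model is asked to reason over.
    """
    context = "\n\n".join(client_documents)
    return {
        "model": model,  # somebody else's foundation model
        "messages": [
            {"role": "system",
             "content": "Answer using only the context below.\n" + context},
            {"role": "user", "content": question},
        ],
    }

payload = build_request(
    "Summarise our Q2 credit exposure.",
    ["[confidential client document 1]", "[confidential client document 2]"],
)
# The confidential text is now part of the outbound request body.
print(payload["model"])           # gpt-4
print(len(payload["messages"]))   # 2
```

A firm that owned its own model could keep that same payload entirely inside the client's infrastructure, which is the strategic gap the column is pointing at.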

Google's own research on training data extraction, or the potential for models to leak details from the data on which they're trained, shows that the risk is very real.

Creating well-guarded, proprietary foundational technologies isn't particularly resource-intensive. To Nvidia Corp cofounder Jensen Huang, whose chips are at the centre of the AI excitement, even a modest US$10 million (352.7 million baht) budget for large-scale models is not unrealistically low. Countries that aren't traditionally known as tech producers are also getting noticed for their breakthroughs. Abu Dhabi's Technology Innovation Institute has made its Falcon 40B -- a model with 40 billion parameters -- royalty-free for commercial use.

The Chinese have clearly not bought into the idea that Silicon Valley will control the keys to generative AI. While Indian software firms' excessive service orientation has meant very few successes in developing products, now is the time for some ambition, and a new strategy that goes beyond charging customers a fee for tweaking OpenAI's GPT-4, Google's Bard or Meta Platforms Inc's LLaMA with specialist data. (Disclosure: Bloomberg has announced its own language model for finance.)

On a recent visit to the country, OpenAI Chief Executive Officer Sam Altman was asked if someone in India with $10 million to invest should dare to build something original in AI.

He said: "The way this works is we're going to tell you it's totally hopeless to compete with us on training foundation models [so] you shouldn't try, and it's your job to like, try anyway."

The message from Abu Dhabi is very clear: Bengaluru should try anyway. ©2023 Bloomberg

Andy Mukherjee is a Bloomberg Opinion columnist covering industrial companies and financial services in Asia. Previously, he worked for Reuters, the Straits Times and Bloomberg News.
