Kazu Gomi has a big view of the technology world from his perch in Silicon Valley. As president and CEO of NTT Research, a division of the large Japanese telecommunications firm NTT, Gomi controls the R&D budget for a significant chunk of the fundamental research done in Silicon Valley.
Perhaps it's no surprise that Gomi is pouring a lot of money into AI for the enterprise in a search for new opportunities to take advantage of the AI explosion. Last week, Gomi unveiled a new research effort focused on the physics of AI, as well as a design for an AI inference chip that can process 4K video faster. This comes on the heels of research projects announced last year that could pave the way for better AI and more energy-efficient data centers.
I spoke with Gomi about this effort in the context of what other big companies like Nvidia are doing. Physical AI has become a big deal in 2025, with Nvidia leading the charge to create synthetic data to pretest self-driving cars and humanoid robots so they can get to market faster.
And building on a story I first covered in my first tech reporting job, Gomi said the company is doing research on photonic computing as a way to make AI computing much more energy efficient.
Decades ago, I toured Bell Labs and listened to the ambitions of Alan Huang as he sought to build an optical computer. Gomi's team is trying to do something similar decades later. If they can pull it off, it could let data centers run on a lot less power, since light doesn't collide with other particles or generate friction the way electrical signals do.
During the event last week, I enjoyed talking to a little desk robot called Jibo that swiveled and "danced" and told me my vital signs, like my heart rate, blood oxygen level, blood pressure, and even my cholesterol, all by scanning my skin to see the tiny palpitations and color changes as the blood moved through my cheeks. It also held a conversation with me via its AI chat capability.
NTT has more than 330,000 employees and $97 billion in annual revenue. NTT Research is part of NTT, a global technology and business solutions provider with an annual R&D budget of $3.6 billion. About six years ago, NTT created an R&D division in Silicon Valley.
Here's an edited transcript of our interview.

VentureBeat: Do you feel like there's a prevailing theme this year for what you're talking about, compared to last year?
Kazu Gomi: There's no secret. We're more AI-heavy. AI is front and center. We talked about AI last year as well, but it's more vivid today.
VentureBeat: I wanted to hear your opinion on what I took away from CES, when Jensen Huang gave his keynote speech. He talked a lot about synthetic data and how it was going to accelerate physical AI. Because you can test your self-driving cars with synthetic data, or test humanoid robots, much more testing can be done reliably in the virtual domain. They get to market much faster. Do you feel like this makes sense, that synthetic data can lead to this acceleration?
Gomi: For the robots, yes, 100%. For robots and all of the physical things, it makes a ton of sense. AI is influencing so many other things as well. Probably not everything. Synthetic data can't change everything. But AI is impacting the way companies run themselves. The legal department might be replaced by AI. The HR department is replaced by AI. Those kinds of things. In those scenarios, I'm not sure how synthetic data makes a difference. It's not making as big an impact as it will for things like self-driving cars.
VentureBeat: It made me think that things are going to come so fast, things like humanoid robots and self-driving cars, that we have to decide whether we really want them, and what we want them for.
Gomi: That's a big question. How do you deal with them? We've definitely started talking about it. How do you work with them?

VentureBeat: How do you use them to complement human workers, and also, as I think one of your people mentioned, to raise the standard of living [for humans, not for robots]?
Gomi: Right. If you do it right, absolutely. There are many good ways to work with them. There are certainly bad scenarios that are possible as well.
VentureBeat: If we saw this much acceleration in the last year or so, and we can expect synthetic data to accelerate it even more, what do you expect to happen two years from now?
Gomi: Not so much on the synthetic data per se, but today, one of the press releases my team put out is about our new research group, called Physics of AI. I'm looking forward to the results coming from this team, in so many different ways. One of the interesting ones is that this humanoid thing comes near to it. But right now we treat AI as a black box. We don't know exactly what's going on inside the box. That's a problem. This team is looking inside the black box.
There are many potential benefits, but one of the intuitive ones is that if AI starts saying something wrong, something biased, obviously you need to make corrections. Right now we don't have a good, effective way to correct it, except to just keep saying, "This is wrong, you should say this instead of that." There is research suggesting that data alone won't save us.
VentureBeat: Does it feel like you're trying to teach a child something?
Gomi: Yeah, exactly. The interesting ideal scenario with this Physics of AI is that, effectively, what we can do is build a mapping of knowledge. In the end, AI is a computer program. It's made up of neural connections, billions of neurons connected together. If there's bias, it's coming from a particular connection between neurons. If we can find that, we can eventually reduce bias by cutting those connections. That's the best-case scenario. We all know that things aren't that easy. But the team may be able to tell that if you cut these neurons, you might be able to reduce bias 80% of the time, or 60%. I hope this team can reach something like that. Even 10% is still good.
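The "cutting connections" idea Gomi describes can be sketched in a few lines. This is a hypothetical toy illustration, not NTT's method: it builds a tiny random network, severs one weight between two neurons, and observes that the output shifts.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network with made-up weights; purely illustrative.
W1 = rng.normal(size=(4, 3))   # input -> hidden connections
W2 = rng.normal(size=(1, 4))   # hidden -> output connections

def forward(x, W1, W2):
    h = np.maximum(W1 @ x, 0.0)   # ReLU hidden layer
    return W2 @ h                 # one-element output

x = np.array([1.0, 0.5, -0.2])
before = forward(x, W1, W2)

# "Cutting a connection" = zeroing one weight between two neurons.
W1_pruned = W1.copy()
W1_pruned[2, 0] = 0.0            # sever hidden neuron 2's link to input 0

after = forward(x, W1_pruned, W2)
print(before, after)  # in the ideal case, a biased behavior shifts along with the output
```

The hard research problem, as Gomi notes, is locating which of the billions of connections to cut, not the cutting itself.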
VentureBeat: There was the AI inference chip. Are you trying to outdo Nvidia? It seems like that would be very hard to do.

Gomi: With that particular project, no, that's not what we're doing. And yes, it's very hard to do. Comparing that chip to Nvidia is apples and oranges. Nvidia's GPU is more of a general-purpose AI chip. It can power chatbots or autonomous cars. You can do all kinds of AI with it. The one we released yesterday is only good for video and images, object detection and so on. You're not going to create a chatbot with it.
VentureBeat: Did it seem like there was an opportunity to go after? Was something not really working in that area?
Gomi: The short answer is yes. Again, this chip is specifically customized for video and image processing. The key is that we can do inference without reducing the resolution of the base image. You can use high-resolution 4K images for inference. The benefit is clearest in the case of a surveillance camera. Maybe it's 500 meters away from the object you want to look at. With 4K video you can see that object pretty well. But with conventional technology, because of processing power, you have to reduce the resolution. Maybe you could tell it was a bottle, but you couldn't read anything on it. Maybe you could zoom in, but then you lose other information from the area around it. You can do more with that surveillance camera using this technology. Higher resolution is the benefit.

VentureBeat: This might be unrelated, but I was interested in Nvidia's graphics chips, where they were using DLSS, using AI to predict the next pixel you need to draw. That prediction works so well that it got eight times faster in this generation. The overall performance is now something like this: out of 30 frames, AI might accurately predict 29 of them. Are you doing something similar here?
Gomi: Something related to that. The reason we're working on this is that we had a project that's the precursor to this technology. We spent a lot of energy and resources in the past on video codec technologies. We sold an early MPEG decoder for professionals, for TV station-grade cameras and things like that. We had that base technology. Within this base technology there's something similar to what you're talking about: there's a bit of object recognition going on in current MPEG. Between the frames, it predicts that an object is moving from one frame to the next by so much. That's part of the codec technology. Object recognition makes those predictions happen. That algorithm, to some extent, is used in this inference chip.
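The inter-frame prediction Gomi alludes to is classically done with block matching: find where a block of the current frame came from in the previous frame. A minimal sketch of that idea (not NTT's codec code, and real codecs are far more sophisticated):

```python
import numpy as np

def best_motion_vector(prev, curr, top, left, size=4, search=2):
    """Find where a block of `curr` came from in `prev` by
    minimizing the sum of absolute differences (SAD)."""
    block = curr[top:top+size, left:left+size]
    best, best_mv = float("inf"), (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y+size > prev.shape[0] or x+size > prev.shape[1]:
                continue  # candidate block falls outside the frame
            sad = np.abs(prev[y:y+size, x:x+size] - block).sum()
            if sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv

# A bright square moves one pixel to the right between two tiny frames.
prev = np.zeros((8, 8)); prev[2:6, 1:5] = 1.0
curr = np.zeros((8, 8)); curr[2:6, 2:6] = 1.0
print(best_motion_vector(prev, curr, top=2, left=2))  # -> (0, -1)
```

The codec then transmits only the motion vector plus a small residual instead of the whole block, which is the "prediction" being repurposed for inference.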
VentureBeat: Something else Jensen said that was interesting: we've had an architecture for computing, retrieval-based computing, where you go into a database, fetch an answer, and come back. Whereas with AI we now have the opportunity for reason-based computing. AI figures out the answer without having to search through all this data. It can say, "I know what the answer is," instead of retrieving the answer. It could be a different kind of computing than what we're used to. Do you think that will be a big change?
Gomi: I think so. A lot of AI research is happening. What you said is possible because AI has "knowledge." Because you have that knowledge, you don't need to go retrieve data.

VentureBeat: Because I know something, I don't need to go to the library and look it up in a book.
Gomi: Exactly. I know that such and such event happened in 1868, because I memorized it. You could look it up in a book or a database, but if you know it, you have that knowledge. It's an interesting part of AI. As it becomes more intelligent and acquires more knowledge, it doesn't have to go back to the database each time.
VentureBeat: Do you have any particular favorite projects going on right now?
Gomi: A couple. One thing I want to highlight, perhaps, if I could pick one: you're looking closely at Nvidia and those players. We're putting a lot of focus on photonics technology. We're interested in photonics in a couple of different ways. When you look at AI infrastructure, you know all the stories. We've created so many GPU clusters. They're all interconnected. The platform is huge. It requires so much energy. We're running out of electricity. We're overheating the planet. This isn't good.
We want to address this issue with some different tricks. One of them is using photonics technology. There are a couple of different ways. First off, where is the bottleneck in the current AI platform? During the panel today, one of the panelists talked about this. When you look at GPUs, on average, a GPU is idle 50% of the time. There's so much data transport happening between processors and memory. The memory and that communication line are the bottleneck. The GPU is waiting for data to be fetched and waiting to write results to memory. This happens so many times.
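The bottleneck Gomi describes can be illustrated with rough, invented numbers (not measurements of any real GPU): when moving data takes longer than computing on it, the processor sits idle.

```python
# Illustrative figures only; real GPUs overlap compute and transfer,
# which this simple model ignores.
flops = 2e12                   # arithmetic work in one step
compute_rate = 100e12          # FLOP/s the chip could sustain
bytes_moved = 4e9              # data shuttled to/from memory for that step
bandwidth = 1e11               # bytes/s of the memory link

compute_time = flops / compute_rate       # 0.02 s of actual math
transfer_time = bytes_moved / bandwidth   # 0.04 s waiting on memory
idle_fraction = 1 - compute_time / max(compute_time, transfer_time)
print(idle_fraction)  # -> 0.5: the chip idles half the time in this scenario
```

Faster optical links attack the `bandwidth` term directly, which is why they matter even without touching the processor itself.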
One idea is using optics to make those communication lines much faster. That's one thing. Making it faster is one benefit of using optics. Another benefit is that when it comes to faster clock speeds, optics is much more energy-efficient. Third, and this involves a lot of engineering detail, with optics you can go farther. You can go this far, or even a couple of feet away. Rack configuration can be much more flexible and less dense. The cooling requirements are eased.
VentureBeat: Right now you're more like data center to data center. Here, are we talking about processor to memory?

Gomi: Yeah, exactly. This is the evolution. Right now it's between data centers. The next phase is between the racks, between the servers. After that it's within the server, between the boards. And then within the board, between the chips. Eventually it's within the chip, between a couple of different processing units in the core, the memory cache. That's the evolution. Nvidia has also released some packaging that's along the lines of this phased approach.
VentureBeat: I started covering technology around 1988, out in Dallas. I went to visit Bell Labs. At the time they were doing photonic computing research. They made a lot of progress, but it's still not quite here, even now. It's spanned my entire career covering technology. What's the challenge, or the problem?
Gomi: The scenario I just mentioned hasn't touched the processing unit itself, or the memory itself. It's only the connection between the two components, making that faster. Obviously the next step is that we have to do something with the processing unit and the memory themselves.
VentureBeat: More like an optical computer?
Gomi: Yes, a real optical computer. We're trying to do that. The thing is, it sounds like you've followed this topic for a while. But here's a bit of the evolution, so to speak. Back in the day, when Bell Labs or whoever tried to create an optical-based computer, it was basically replicating the silicon-based computer one to one, exactly. All the logic circuits and everything would run on optics. That's hard, and it continues to be hard. I don't think we can get there. Silicon photonics won't address the issue either.
The interesting piece is, again, AI. For AI you don't need very fancy computations. The core of AI computation is relatively simple. Everything is a thing called matrix-vector multiplication. Information comes in, there's a result, and it comes out. That's all you do. But you have to do it a billion times. That's why it gets complicated and requires so much energy and so on. Now, the beauty of photonics is that it can do this matrix-vector multiplication by its nature.
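The computation Gomi is describing can be written in a few lines. This is a minimal sketch with made-up layer sizes: a neural network's forward pass is just matrix-vector multiplication, repeated layer after layer, which is the operation an optical device could perform natively.

```python
import numpy as np

rng = np.random.default_rng(1)

# Three illustrative layers; real models have billions of weights.
layers = [rng.normal(size=(64, 128)),
          rng.normal(size=(32, 64)),
          rng.normal(size=(10, 32))]

x = rng.normal(size=128)          # information comes in...
for W in layers:
    x = np.maximum(W @ x, 0.0)    # ...matrix-vector multiply, repeat...
print(x.shape)                    # ...and a result comes out: (10,)
```

A digital chip burns energy on every one of those multiply-accumulates; in a photonic device the multiplication happens in the physics of light propagation and interference, which is the efficiency argument.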
VentureBeat: Does it involve a lot of mirrors and redirection?

Gomi: Yeah, mirroring and then interference and all that stuff. To make it happen more efficiently and everything, in my researchers' opinion, silicon photonics may be able to do it, but it's hard. You have to bring in different materials. That's something we're working on. I don't know if you've heard of this, but it's lithium niobate. We use lithium niobate instead of silicon. There's a technology to make it into a thin film. You can do those computations and multiplications on the chip. It doesn't require any electronic components. It's pretty much all done in analog. It's super fast, super energy-efficient. To some extent it mimics what's going on inside the human brain.
Those hardware researchers have a goal: a human brain works with maybe around 20 watts. ChatGPT requires 30 or 40 megawatts. We can use photonics technology to drastically upend the current AI infrastructure, if we can get all the way there to an optical computer.
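Using the figures Gomi cites, the size of that gap is easy to quantify:

```python
brain_watts = 20
chatgpt_watts = 30e6              # 30 MW, the low end of Gomi's range
print(chatgpt_watts / brain_watts)  # -> 1500000.0, roughly six orders of magnitude
```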
VentureBeat: How are you doing with the digital twin of the human heart?
Gomi: We've made pretty good progress over the last year. We created a system called the autonomous closed-loop intervention system, or ACIS. Assume you have a patient with heart failure. With this system applied, it's like autonomous driving. Theoretically, without human intervention, you can prescribe the right medication and treatment to this heart and bring it back to a normal state. It sounds a bit fanciful, but there's a bio-digital twin behind it. The bio-digital twin can precisely predict the state of the heart and what an injection of a given drug might do to it. It can quickly predict cause and effect, decide on a treatment, and move forward. Simulation-wise, the system works. We have some good evidence that it will work.

VentureBeat: Jibo, the robot in the health booth, how close is that to being accurate? I think it got my cholesterol wrong, but it got everything else right. Cholesterol seems to be a hard one. They were saying that was a new part of what they were doing, while everything else was more established. If you can get that to high accuracy, it could be transformative for how often people need to see a doctor.
Gomi: I don't know too much about that particular subject. The conventional way of testing it, of course, is that they have to draw blood and analyze it. I'm sure someone is working on it. It's a matter of what kind of sensor you can create. With non-invasive devices we can already read things like glucose levels. That's interesting technology. If someone did it for something like cholesterol, we could bring it into Jibo and go from there.