On Tuesday, 14 October 2025, Oracle Corporation (NYSE:ORCL) showcased its strategic vision at the Oracle AI World 2025 Keynote. Larry Ellison highlighted Oracle’s distinctive approach to AI, emphasizing its dual focus on AI infrastructure and application. While Oracle is advancing AI’s transformative potential, challenges remain, particularly in building expansive AI data centers.
Key Takeaways
- Oracle is constructing a massive AI data center in Texas with over 450,000 NVIDIA GPUs.
- The company is rebuilding the Cerner code base with AI to enhance healthcare systems.
- Oracle’s AI Data Platform lets models reason over private data while keeping it private and secure.
- Future AI applications include biometric security and advanced patient monitoring.
AI Infrastructure and Model Training
- Oracle is a key player in AI model training, surpassing competitors in multimodal model training.
- A new data center in Abilene, Texas, will house over 450,000 NVIDIA GB200 GPUs.
- The construction involves eight buildings across 1,000 acres, powered by grid and natural gas.
- Oracle has trained AI models like Grok for Elon Musk and is tackling complex engineering challenges.
AI Data Platform and Private Data
- Oracle’s AI Data Platform integrates multimodal models like Grok and ChatGPT with user data.
- The platform ensures data privacy, preventing unauthorized data sharing.
- Oracle’s AI Database employs RAG to vectorize data for AI model reasoning.
- An internal project predicts Oracle customer purchasing behavior using AI.
Healthcare Automation and the Ecosystem Approach
- Oracle is overhauling the Cerner code base to automate the healthcare ecosystem.
- The AI-driven approach connects patients, providers, payers, and regulators.
- AI agents optimize care quality and reimbursement by accessing medical literature and insurance rules.
- Financial AI agents provide banks with reimbursement information to facilitate hospital loans.
Future Applications of AI
- AI technologies will eliminate passwords and prevent credit card fraud through biometric security.
- Low-cost medical devices will enable continuous patient monitoring and emergency connectivity.
- AI enhances diagnostic imaging, aiding rapid cancer detection and pathogen identification.
- Robotic greenhouses and modified wheat plants aim to increase food production and reduce CO2.
In conclusion, Oracle’s presentation at the Oracle AI World 2025 Keynote demonstrated its commitment to advancing AI technology across industries. For more details, refer to the full transcript below.
Full transcript - Oracle AI World 2025 Keynote:
Larry Ellison, Oracle: Hi, everybody. Let’s see. Okay. It says AI changes everything. That’s kind of a big statement, everything.
I think it’s pretty close. Okay. So I’m going to talk a little bit about how Oracle’s been responding to these changes that started, well, I guess they started in earnest when ChatGPT 3.0 came out and suddenly AI models started sounding a little bit like us. Okay. There are two big phases of this AI technology.
One is the dawning of the AI era, which is a bunch of companies building these enormous AI models. An AI model right now, what’s called a multimodal AI model, is actually made up of several neural networks, like your brain has several parts. It’s kind of a perfect analogy. To do vision, you use one part of your brain. To do language, you use a different part of your brain.
When you build an AI model, you use a dedicated neural network for vision: seeing something, seeing its edges, seeing its shape, seeing its color, seeing it move. You use one neural network for seeing it and quite a different neural network for recognizing what it is, identifying it. And then a third neural network to classify it, organize it, and reason with that data. So very much like our brains, a modern AI model is a multimodal model.
It has multiple neural networks to look at different kinds of data: video data, textual data, audio, things like that. Well, what’s going on right now is that a series of companies are spending vast fortunes training these AI models on publicly available data on the Internet, enormous amounts of data. After a few years of it, it’s very apparent that this AI training has become the largest, fastest-growing business in human history, bigger than the railroads, bigger than the Industrial Revolution.
I mean, a whole new world is dawning. There’s the building of the models, and then, once those models are built, there’s actually using those models to solve very important problems. Early diagnosis of cancer, for example. There will be a lot of surgery that is more precise and more accurate than human beings can do. Robots will be much better surgeons than human beings can be, for all sorts of interesting reasons you might not guess.
Anyway, the big opportunity in AI training is upon us, and Oracle is a major participant in building data centers to do AI training. But the much, much larger opportunity, the one that will truly change the world, isn’t the creation of the models themselves, the training of the models. What will change the world is when we start using these remarkable electronic brains, and that’s what they are, to solve humanity’s most difficult and enduring problems. Now, there’s one interesting thing where Oracle is explicitly involved. As I said earlier, these AI models are trained on publicly available data, all the data on the Internet.
So if you look at ChatGPT or Anthropic’s Claude, Grok, Llama, what have you, they’re all trained on all of the data on the Internet. In other words, publicly available data. But for these models to reach their peak value, you need to train them not just on publicly available data; you need to make privately owned data available to those models as well. And that’s where Oracle plays a particularly important role, because most of the world’s high-value data is already in an Oracle database. We just had to change, and that is past tense, we did change, the Oracle database so that it can take the data that’s already there and make it available to AI models for reasoning. So the AI model can reason not just on public data, but on private data.
AI is an incredible tool. Some people think it’s going to replace all human beings and all of our human endeavors. I don’t think that’s true. It will, however, help us solve problems we couldn’t solve on our own. It will make us much better scientists and engineers and teachers and chefs and bricklayers and surgeons and what have you.
We’ve never built a tool anything like this. I pressed the button and the slide didn’t move. I pressed the button again. And this is not an AI device. One more time, and then I’m just going to say the word “slide.”
There we go. Okay. Well, it did both. Who knows why it moved? Okay.
I remember when this wasn’t called AI World. Remember when it was called Cloud World, a long, long time ago? Even though it was called Cloud World, I was still allowed to do a presentation on AI. I said, is AI the most important technology in human history? We’re going to find out soon. Well, it’s pretty clear.
The smartest people I know are working on... what? I didn’t press the button. Okay. So I’m not pressing the button. Can you back up the slide, please?
Thank you. I’m just going to put that down. We’re going to get a better one next time. The smartest people I know are investing fortunes, to be specific, they’re investing their own fortunes, in building and training these AI models.
That’s how important they are. That’s how extraordinary they are. By the way, Elon, Mark, Sam, in alphabetical order: all really smart guys, extraordinary people. People say, this AI thing, maybe it’s not that big a deal, maybe it’s just a bubble. Well, the Internet really was a big deal.
I mean, if you look at the fortunes created by the Internet, it certainly worked out for Google. Search seems to have paid off. And Elon, on that list, did start PayPal, and that paid off nicely. And I’ve asked him: Elon would say he definitely didn’t put a dime into pets.com.
And the thing is, when people talk about bubbles, what is a bubble? People get exuberant. The Internet was an incredible new technology, remains the foundation of computing, and we couldn’t have AI without the Internet. So it’s incredibly important technology, but people started confusing Internet companies like PayPal, or even worse, Internet search, worse meaning better, with pets.com. I mean, the fact that I can sell pet food on an e-commerce site suddenly means I am an Internet company?
Not really. So yes, there will be people spending money on AI because almost every tech company these days calls itself an AI company, and a lot of them are not. But AI, in terms of its value, this is the highest-value technology we have ever seen, by far. Next slide, please.
AI. You know, it’s interesting, because it’s called artificial intelligence, as opposed to artificial perception. But it does perceive. It hears, it smells. Think about smelling. I mean, the idea that you can pick up chemicals that are just drifting around in the atmosphere and figure out what those chemicals are.
Dogs can smell cancer in patients. We should be able to do that with AI. In fact, there’s a project I know of, called the Dog’s Nose, that I’m actually a part of, and we’re building sensors that can smell cancer or other illnesses. But AI perceives.
It’s got the part of the brain that hears and sees, in addition to reasoning. I mean, it can read street signs. It can read a page in a book. It can take a look at you and recognize you. It can identify the song that’s playing. You can talk to AI and ask it a question, or type it out.
AI can reason logically, very quickly, using language the same way we do, and mathematics. I remember I was over at Tesla looking at the Optimus robots, and I was curious just how the robots were going to learn. Then I thought about it for a minute and said, well, how would a robot learn to clean your house or scramble eggs or play the guitar? It would just watch an Internet video. It’s connected to the Internet. It can learn to play piano just like we would, watching an Internet video, except it would do it a little faster, because it can play the video at very, very high speed and learn to play that piece by Chopin in about five seconds. I know my kids can’t learn to play piano that fast, because I listen to them practice every day, and five seconds is out of the question.
AI robots will be much better surgeons than the best doctors. There is a very famous surgery started by Dr. Mohs, who would take cancer lesions off patients’ faces. He was famous for it because he did the least damage; he took the least amount of tissue off your skin, so cosmetically he had fantastic results. What he did was take a couple of layers of skin off, then take the skin over to a microscope and look at it: is he taking any healthy cells yet? How deep does the cancer go? So it’s back and forth: cut a little bit of tissue, look at it under the microscope, cut a little bit more tissue, look at it again. Well, AI robots don’t play fair.
The vision on the robot is microscopic. They don’t need a microscope to see individual cells. They don’t need a microscope to see where the cancer ends and the healthy tissue begins. They’re better surgeons than we are not because they’re smarter than we are, but because they have better hand-eye coordination. Their eyes are way better than ours, and the precision of their hands is way better than ours, so they can cut between a layer of healthy cells and a layer of cancer cells.
It’s truly stunning to watch, and it’ll be very reassuring when we can go to a doctor who’s using a robot to do the surgery. The surgery will be perfect. I said this earlier, but it is so interesting: it’s built just like the brain, with specialized neural networks, one for vision. Literally, a convolutional neural network simulates the visual cortex. The visual cortex has five layers, right in the back of your head. Evolution produced the very first layer, V1, just so the animal could perceive the edges of something it was looking at; then it got up to V4 for color, and the very famous V5 for motion, detecting motion and threats in the environment.
The ViTs, the vision transformers, then take that bitmap. The convolutional neural network produces a bitmap, an image, a bunch of pixels if you will, and then the vision transformer compares that to things you already know, and can recognize faces and things that are familiar. That’s a transformer, a ViT neural network, for a holistic understanding of the image and recording it. Version three of ChatGPT was the one that used the huge transformer networks that did comprehensive language and reasoning. We had facial recognition long before we had the ability to converse and reason using language with the GPT network, the generative pre-trained transformer network, which does the language and the reasoning, but that requires enormous amounts of compute, thus the requirement for fortunes to train these models.
The transformer network is much bigger and much more complex than some of the other networks. As you’d expect, reasoning is more complex than vision. And then there are networks for certain types of mathematics. Anyway, it looks a lot like the brain. The brain is amazing.
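The edge-detection role Ellison attributes to V1, and to the first layer of a convolutional network, can be sketched in a few lines. This is a pure-Python illustration with a hand-written kernel; a real model would use a framework and learned filters:

```python
# One CNN-style filter doing what Ellison says V1 does: responding to edges.

def convolve2d(image, kernel):
    """Slide a kernel over the image (valid mode), like one CNN filter."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = [[0.0] * (w - kw + 1) for _ in range(h - kh + 1)]
    for i in range(len(out)):
        for j in range(len(out[0])):
            out[i][j] = sum(image[i + di][j + dj] * kernel[di][dj]
                            for di in range(kh) for dj in range(kw))
    return out

# A vertical-edge detector (Sobel-like), the feature a first layer picks up.
edge_kernel = [[-1.0, 0.0, 1.0],
               [-2.0, 0.0, 2.0],
               [-1.0, 0.0, 1.0]]

# Toy image: dark on the left, bright on the right, a vertical edge in between.
image = [[0.0, 0.0, 0.0, 1.0, 1.0, 1.0] for _ in range(6)]

response = convolve2d(image, edge_kernel)
# The filter fires only at the edge columns and stays silent in flat regions.
```

Stacking many such learned filters, then feeding the resulting feature map to a transformer, is roughly the CNN-plus-ViT pipeline the transcript describes.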
The 20-watt human brain. 20 watts. If you’ve ever screwed in a 20-watt light bulb, no, that is not a lot of light, but it is enough to run 86 billion neurons and give you vision and balance and reasoning and language and creativity and the ability of deduction and inference. You can do all of that with this incredible thing Elon calls a 20-watt meat computer. Sensation, recognition, and after recognition, the ability to reason on it. Again, the visual cortex is right behind the parietal lobe, behind and below it.
The prefrontal lobe, as you can see on the left side, is a big language center. The brain is very specialized, and so are the AI models. But we’re not building a 20-watt meat computer. We’re building a 1.2-billion-watt AI brain. Did you ever try to do multiplication as fast as an HP calculator?
These electronic brains, these AI models, reason, and they reason very quickly, and they can deal with a lot of data, and they can get to answers that we’ve never gotten to. This is a picture of a data center we’re in the process of building. Part of it is up and running. Eventually, it’s going to have half a million NVIDIA GPUs in it. By the way, to give you an idea, 1.2 billion watts, what does that really mean?
That’s enough to power a million four-bedroom homes in the United States. A million, that’s a pretty good-sized city. And I think we’ve got a video on the construction.
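The two figures are consistent, which a quick back-of-the-envelope check confirms: 1.2 billion watts spread over a million homes is 1.2 kW each, roughly an average US household’s continuous draw:

```python
# Sanity-check the keynote's figures: 1.2 GW vs. a million homes.
data_center_watts = 1_200_000_000   # the Abilene campus figure
homes = 1_000_000

watts_per_home = data_center_watts / homes
print(watts_per_home)  # 1200.0 -> 1.2 kW continuous draw per home
```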
Unidentified speaker: Oracle is building the world’s largest AI cluster for OpenAI in Abilene, Texas. The project began as empty land in June 2024 and is delivering GPUs in less than one year. The cluster will contain more than 450,000 NVIDIA GB200s when fully provisioned. Power is provided by a combination of grid power and on-site natural gas turbines. Capacity is provided in eight separate buildings spanning 1,000 acres, all interconnected to support a single workload.
This site deploys the latest technology across AI accelerators, liquid cooling, and networking. More than 3,500 people work on-site each day to deliver capacity at an unprecedented rate. Demand for AI continues to exceed supply, and Oracle is committed to delivering the largest and most advanced AI clusters to support our customers all over the world.
Larry Ellison, Oracle: Well, that’s a long way from writing code in my bedroom in college. What happened? I have no idea. Okay. Okay.
So we are training, we trained the very first version of Grok for Elon, and we are training a number of other multimodal AI models. Almost all of these AI models are in the Oracle Cloud, and I will come back to that. We are certainly involved in training more multimodal AI models than any other company. It is very exciting, and it is daunting. I mean the size of these projects that we are running: it’s not just building the network of GPUs, the computer rooms and the networks and the cooling, and that was hard in the first place, by the way.
But now we have to build the power plants. There is a natural gas pipeline that goes to the gas turbines, fires up the gas turbines, and generates electricity. That electricity then has to be moved to the data center. So it is gas pipelines, power generation, power transmission, data centers, networks, and those data centers are filled with lots and lots of complex software and a lot of very smart, hardworking engineers. These are enormous engineering projects, each and every one of them. What we are trying to build are these multimodal neural networks trained on all types of data: textual data, image data, audio, video.
We train these models on every publicly available piece of data plus synthetic data. Some of the models are designed to be real time. Actually, Google has two models: one is Gemini, one is DeepMind’s. The DeepMind model is highly specialized around molecular structures, and DeepMind won a Nobel Prize last year. Not this year, last year.
They won a Nobel Prize on protein folding. You understand, it’s taking a molecule where you know the chemical formula of that protein, a chain of amino acids, and asking what it looks like in 3D when you fold it up and it is no longer a string. That’s a problem we’ve been working on for a very long time, folding proteins, and they solved it with the DeepMind model that Google owns, from when they bought DeepMind in London. Elon has two AI models that are very, very different. One is Grok, a multimodal AI model.
The other is Tesla’s, and it’s a real-time model. Real-time models have some different characteristics than, let’s say, an Anthropic model that generates code, or a ChatGPT that is solving a legal problem or a medical problem, something like that. If you’re driving cars, things happen very fast. So yes, you have to have vision, you have to have cameras all over those cars, but if something happens, you might be required to respond in a millisecond, a thousandth of a second. A ball suddenly comes off a curb, and a bike is following the ball. You have to see it, understand what’s going on, and take evasive action so there’s no accident and no one’s injured.
You have to build things differently when you can’t afford the network traffic to go across the network and talk to an AI model that’s far away; you need a very, very low-latency response time. That’s why all the Tesla cars, all the Tesla robots, have to have local compute in the car, local compute in the robot, to make an immediate, very low-latency decision. That’s not required, for example, if you’re writing code. I can tell you what code to write, and you can take a moment to think about it and then give an answer. So the real-time models are a bit different from the models that don’t require real time, where you have some time to reason and compute your answer.
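The latency argument can be made concrete with back-of-the-envelope numbers; the specific latencies below are illustrative assumptions, not Tesla figures. What local compute buys back is the distance the car travels while waiting for an answer:

```python
# Distance a car covers while waiting for an inference result, at highway speed.
speed_mps = 30.0  # ~67 mph

def distance_during_latency(latency_s: float) -> float:
    """Meters traveled before the model's answer arrives."""
    return speed_mps * latency_s

local_s = 0.001  # ~1 ms on-board inference (illustrative)
cloud_s = 0.100  # ~100 ms round trip to a remote model (illustrative)

print(distance_during_latency(local_s))  # about 0.03 m: 3 cm
print(distance_during_latency(cloud_s))  # about 3 m: most of a lane width
```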
But both types of models are very important, and both types of models are being built. These models do multi-step reasoning. Now, what I’m calling reasoning was, not long ago, called inferencing. People would talk about it as two things: we train the model, and then we use the model. When the model was reasoning, we reduced that to just inferencing, one type of reasoning. That’s no longer true.
In the early days, inferencing is what models did. Not anymore. They reason like we reason. They do deduction, they do inference, they do calculations, they have strategies, they have rules.
All the reasoning techniques that we use, they simulate and use, but they think a lot faster than we do and solve problems a lot faster than we do, or they solve really complicated problems that we can’t solve at all. And that’s what makes this so exciting and so enormously valuable. These models can answer your questions, they can generate computer code. A lot of the code that Oracle is writing, Oracle isn’t writing; our AI models are writing it. We just tell the model what we want the program to do, and then the AI comes up with a step-by-step process to actually do it.
We do not write the procedure; we declare our intent, and the model writes the step-by-step procedure, the thing that we commonly think of as a computer program. They diagnose medical images far better than we do. They design drugs that we cannot. But there is a big gotcha with these models, and that is that the models do not get trained on your private data, because, for some reason, people want to keep their private data private.
And that’s not going to change. But people also want these models to reason on their private data. Have your cake, eat it too, whatever you want to call it. I want to keep my data private. I don’t want to share it with anybody else.
However, I’d like to use this enormously powerful tool to reason on my private data. And that’s one of the big things Oracle has been applying itself to: solving that particular problem. And we have this new thing we’re talking about this week here in Las Vegas: the Oracle AI Database and the Oracle AI Data Platform. An interesting thing about the AI Data Platform is that it includes a multimodal model of your choice.
Well, a multimodal model of your choice. Okay. That’s great. So if you want to use Grok in the Oracle Cloud, you can use Grok. If you want to use ChatGPT, you can use ChatGPT.
If you want to use Llama, you can use Llama. You want to use Gemini, you can use Gemini. We’ll attach the model of your choice not only to the public data; the model is already connected to the public data, that’s done. But we give you the ability to add your private data to the model’s library of information and knowledge. So the model can reason across not just public data, but also private data, while keeping your private data private, not sharing it with anybody else.
That’s very, very important, and it’s not easy to do in a highly secure way. It’s not easy. If it were easy, a lot of people would have already done it. Okay. So as I said, OCI includes all of the popular multimodal models.
You can mix and match, and we have the AI Database and the AI Data Platform that let you add private data to the models. In fact, I’m going to be a little more precise this time. What it really does, and it’s called RAG by the way, retrieval-augmented generation, is basically take a bunch of data that the model has not been trained on. And by the way, that might be today’s stock prices. The model doesn’t know today’s news. The model hasn’t been trained on today’s news.
The model hasn’t been trained on today’s stock prices. But the model knows where to look; it knows how to ask for today’s stock prices. It knows how to look at the ticker and get the very latest quote. You just put that information in a database that the model can access, and you put your private data in an Oracle database. The new Oracle database is called an AI database not just because AI is fashionable.
The new Oracle database is called an AI database because it has this RAG capability. It has the ability to take any of the data in the Oracle database and make it accessible to the AI model by vectorizing it. So, since a lot of your data is in an Oracle database already, you simply ask the Oracle database to put that data in a format the model will understand, and that’s called a vector format. The Oracle database will vectorize any data that you want to make available to the model and then reason on it. By the way, it’s not just data.
It’s not just data in an Oracle database that the Oracle AI database will vectorize. Let’s say you have a lot of data in OCI object store, or Amazon’s object store for that matter, and you’d like to make that data available to the model, to the Oracle AI Data Platform. No problem. The Oracle database can go into OCI object store, vectorize it, and create what’s called a vector index on data in OCI object store. It can go into Amazon cloud storage and vectorize the portions that belong to you and make them accessible for reasoning by the multimodal model. So you’re not restricted to data that’s just in the database.
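The RAG loop described here can be sketched in a few lines. In this toy version a bag-of-words counter stands in for a real embedding model and a Python list stands in for the database’s vector index; the document texts are invented, and none of this is Oracle’s actual API:

```python
import math
from collections import Counter

# Toy "vectorizer": word counts stand in for a real embedding model,
# which is what the database would use to turn rows and documents into vectors.
def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Private documents the model was never trained on.
private_docs = [
    "Acme Bank renewed its database license in March",
    "Globex Hospital evaluated the HR module last quarter",
]
index = [(doc, vectorize(doc)) for doc in private_docs]  # the "vector index"

def retrieve(question: str, k: int = 1) -> list:
    """Return the k most similar private documents to the question."""
    qv = vectorize(question)
    ranked = sorted(index, key=lambda pair: cosine(qv, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

question = "Which customer renewed a database license"
context = retrieve(question)
# The retrieved text is prepended to the prompt; the model reasons over it
# without ever having been trained on the private data.
prompt = f"Context: {context[0]}\nQuestion: {question}"
```

The private data never enters model training; it is only retrieved at question time and supplied as context, which is exactly the have-your-cake-and-eat-it-too property the keynote emphasizes.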
The Oracle database can vectorize anything, whether it’s in an Oracle database, a different database, or a different cloud, and make that data easily accessible to the AI model for reasoning. The reasoning is fascinating. The first project Oracle did, in terms of taking private data and making it accessible to AI models, was taking all of our customer data and vectorizing it. We used RAG to make it available to the models. We started with customer data because we think there is nothing more important to us than our customers.
Now, some people who are cynical would say there is nothing more valuable to us than our customers, but those go hand in hand. So there were certain interesting questions we wanted to ask that we thought were extremely high-value questions. There is a whole industry called customer relationship management; well, actually, it is not called that anymore, they changed the name to CX, customer experience. Whatever the name is, we know what the questions are. So we ran this project inside of Oracle: took our private customer data, put it in an Oracle database, vectorized it, and used RAG to make it accessible to a multimodal AI model.
Then we asked the question: which Oracle customers are likely to buy another Oracle product in the next six months? Now, why should that be important to us? And specifically, for each and every customer that’s going to buy something in the next six months, do you mind telling me what product they’re most likely to buy?
And by the way, it’s not just questions that this thing does. You can ask questions, you can prompt it and get answers, but you can also ask it to do things via agents. You can create little computer programs, sometimes not so little, and ask the AI to actually do something, to orchestrate some process. So then we said, okay, let’s send a mail to all of our prospective buyers with the three best customer references, encouraging them to buy. Now, that request required the generation of a computer program called an AI agent, which had to figure out: okay, you were going to buy this product, you’re a bank in Switzerland, so we think the best references for you would be the banks in Switzerland that have already bought that product.
So all of the references would be customized based on what we know about you as a customer and the exact situation you’re in: the business you’re in, the products you have, the other banks you have good relationships with and can call for a reference. Anyway, it’s extremely interesting that it can solve a problem like this so quickly and tell us what the sales force should be concentrating on at Oracle over the next six months. Amazing. Okay. So that application, that AI agent... if I can just back this up, I’m going to have to back up my slide once.
Okay. The last thing, the last line: send an email to prospective buyers with the three best references. From that single line, you can generate the AI agent to actually do that properly. Or, if you wanted to do a little bit more, you could get even more precise. You could add more things to it, exactly what you want to do, what kind of letter you want to send them, and make the agent even more capable, and that’s actually what we did.
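The “three best references” logic that single line implies might look something like the sketch below. The customer fields, names, and scoring rule are all hypothetical, invented for illustration; they are not Oracle’s generated agent:

```python
from dataclasses import dataclass

@dataclass
class Customer:
    name: str
    country: str
    industry: str
    owns_product: bool  # has already bought the product being recommended

prospect = Customer("Helvetia Bank", "Switzerland", "banking", False)
existing = [
    Customer("Zurich Trust", "Switzerland", "banking", True),
    Customer("Tokyo Retailer", "Japan", "retail", True),
    Customer("Geneva Credit", "Switzerland", "banking", True),
    Customer("Basel Savings", "Switzerland", "banking", True),
]

def best_references(prospect: Customer, customers: list, k: int = 3) -> list:
    """Rank owners of the product: same industry and same country score highest."""
    def score(c: Customer) -> int:
        return (c.industry == prospect.industry) + (c.country == prospect.country)
    owners = [c for c in customers if c.owns_product]
    return sorted(owners, key=score, reverse=True)[:k]

refs = best_references(prospect, existing)
email = (f"Dear {prospect.name},\n"
         f"Customers like {', '.join(c.name for c in refs)} "
         f"already rely on this product.")
```

For the Swiss bank, the three Swiss banking customers outrank the Japanese retailer, matching the customization the transcript describes.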
And by the way, I don’t know if you’ve heard this term. I thought it was a little strange the first time I heard it: vibe coding. Right? Sounds very Gen Z, which is: just say what you want the program to do, generate the prototype, and try it out.
Don’t think about it too hard, just kind of get a feeling for it and feel the vibe, I guess. But you actually can use English. You can generate computer programs directly from English. Personally, I’ve had debates with other engineers here at Oracle about whether using English as a programming language is a good idea, because English is notoriously imprecise. Wouldn’t we be better off, if we want to generate programs, creating a custom, highly precise declarative language for computer programming? Well, that’s what we did at Oracle: we added a declarative AI generation language to APEX for generating applications. But there are plenty of people out there still working with English, and that’s fine; it’s up to you.
We don’t make those decisions for you; we just make sure that you have options. But most of the new applications that Oracle is creating now are AI agents that were generated, not handwritten, and they’re connected by workflows. The interesting thing is, when we generate these applications, there are no security holes in them, because the application generator doesn’t forget things and leave things out, and doesn’t make those kinds of mistakes. Every application that we generate is stateless and reliable. In other words, if the computer that application was running on suddenly blows up, loses power, whatever happens, something catches fire, that application can immediately restart in a different data center, because it is stateless. Even though it stopped running in location A, it will pick up running in location B without missing a beat, without losing any data, without the customer ever perceiving it.
So when you’re generating these applications, they have built-in backup, built-in no single point of failure, built-in reliability, built-in security, and built-in scalability. A lot of these low-code application programming languages are designed to write departmental things. Maybe they work for twenty, thirty, forty users, but after that they start to slow down, because they’re really not designed to scale to millions of users. Well, because we generate the application, the design is always the same. We always design it for millions of users, even if there are only five; it will run faster that way and use fewer resources.
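The stateless-failover property described above can be sketched with a shared store standing in for the database; the instance names and order fields are illustrative. Because no request state lives in a worker’s memory, any instance can resume where another left off:

```python
# All state lives in a shared store (a dict standing in for the database),
# so application instances keep nothing in memory between requests.
shared_store = {}

class AppInstance:
    """A stateless worker: it reads and writes only the shared store."""
    def __init__(self, name: str):
        self.name = name

    def handle(self, order_id: str, step: str):
        state = shared_store.setdefault(order_id, {"steps": []})
        state["steps"].append((step, self.name))

dc_a = AppInstance("datacenter-A")
dc_b = AppInstance("datacenter-B")

dc_a.handle("order-42", "validate")
# datacenter-A "catches fire"; B resumes the same order without losing anything.
dc_b.handle("order-42", "charge")

print(shared_store["order-42"]["steps"])
# [('validate', 'datacenter-A'), ('charge', 'datacenter-B')]
```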
The productivity gains we’re getting from this are one of the reasons we feel so good about our efforts in healthcare, about our ability to rebuild the Cerner code base. We can rebuild the entire Cerner code base, modernize it using AI, build a modern version of Cerner by generating it. We already have all of the code for clinics operating, and next year we will have all acute hospitals. We will have rewritten in three years everything that Cerner wrote over a quarter of a century. But what ours does is much more than theirs ever did. We attack the problem not just as automating a hospital or clinic, but as automating the entire ecosystem.
Those are the kind of enormous productivity gains you get when you use these incredible AI tools. The example of rebuilding Cerner is fascinating because it is really not all of what we are doing. Yes, we are rebuilding Cerner, but we are also building accounting systems designed for hospitals and HR systems designed for hospitals. Hospitals are very unusual; they are kind of a fifty-fifty gig economy, people in and out. A lot of nurses will work for one hospital and also work for private patients; they will have schedules. You don’t know how many nurses you need, or doctors for that matter, on Monday; it depends on what you are doing, how many patients you are seeing, how many operating theaters are available. So an HR system for a hospital is very, very different and complicated.
There are a lot of certifications that doctors, nurses, and other health professionals and technicians have to get in order to do certain tests, perform certain procedures, and treat certain patients. Our HR system has to deal with those certifications, schedule the training, schedule when they are working, and, since they trade shifts a lot, be flexible about all of that: paying them properly when they are working a lot of overtime, but also understanding when they’re only working two days a week here and four days a week at another hospital across town. So we’re building HR systems and accounting systems and banking systems, and this last one may surprise you, and then I’ll go into my example: banking systems that cater to hospitals, making loans to hospitals based on their receivables.
And here I am going to describe an AI agent. Our goal was not just to automate hospitals and clinics, like Cerner did or our other competitors do. We thought, following Elon Musk’s rule, that if we really want to be successful in healthcare, we can’t just automate hospitals and clinics. We have to automate the entire ecosystem, just like Elon had to build a worldwide charging network or electric cars weren’t going to work.
He couldn’t just make the cars and assume that Standard Oil would provide the fuel, which is what Ford did. No. To build electric cars, he had to not only design an electric car, manufacture batteries, put robots in the manufacturing plant, and figure out how to sell cars on the Internet; he had to build a worldwide network of charging stations. He had to build a complete ecosystem for electric cars.
If we want to automate hospitals and clinics, those hospitals and clinics are not going to be very efficient if the people who regulate them are not also automated, or if the patients who are making appointments or receiving the results of a blood test do not also have access to that automation technology. You have to automate the patient, the provider, the payer, the regulator, the pharma companies, the banks who finance the hospitals, and the governments who regulate the hospitals and collect information from them. You have to automate the entire ecosystem. Then you will get a truly modern, efficient healthcare system, and that’s what we were after when we bought Cerner as a first step.
Anyway, one of the most interesting AI agents we’ve ever built connects providers to payers, because this is a very interesting problem, and it took me a while to fully grasp it when working on this. What do we want the hospital to do? The hospital has to figure out the best possible care it can give this patient. Well, that’s kind of true. But let’s say you’re in the UK, and the best possible care says that you have high blood sugar and I’ve got to put you on Ozempic or another GLP-1.
Well, guess what? The NHS in the UK doesn’t pay for Ozempic. They won’t reimburse you for it, and it’s very expensive. So are there any other drugs that will help you manage your blood sugar levels? Yes, there are.
And are those drugs pretty good? Yes, they are. And will the NHS reimburse you for those? Yes, they will. So what you are really doing when you are automating a hospital in the UK is working with the doctor to come up with the best possible quality of care that is fully reimbursable, if the patient can’t afford to pay themselves.
Those two things are tightly coupled together. It’s pointless to prescribe Ozempic to someone in the UK who can’t afford it, because the government is the insurance company in the UK and the NHS doesn’t pay for Ozempic. So this is what we had to build, and we built something that works in the United States and in the UK and all over the world and solves this problem. The problem was: the best possible care that’s fully reimbursable.
That was our goal. So the AI model that we built first uses RAG to access the latest medical literature and the latest test results in the EHR, vital signs, all of your blood tests, all of that information, to assist the doctor in coming up with the optimal, best possible care. And it has to know things like: there’s a new clinical trial for this particular type of cancer that applies to this patient, and the doctor should consider putting this patient in that clinical trial. So the AI model, not surprisingly, will have all of the latest information about clinical trials and which drug is working better than the others for the particular patient the doctor is looking at.
The AI model will provide information to doctors as the doctor tries to figure out the best possible care for the patient. Then the AI model also uses RAG to access the latest rules and policies. In the United States, those would be insurance policies and rules, depending on what insurance you have. Do you have Medicare? Medicaid?
Do you have supplementary insurance? What are all the different things you have? So I have to figure out: what is covered? What gets reimbursed? It is really those intersecting sets: what is the best care, and what is fully reimbursable. So I have to train the model on all of the insurance rules to make sure that what the doctor is prescribing is fully reimbursable.
And I have to catch little snags along the way. Well, actually, the NHS does reimburse for Ozempic in the UK if your body mass index is beyond a certain point, and I have to make sure the doctor knows that. I can let the doctor know: actually, this case is an exception. This patient is eligible for Ozempic because they are overweight past a certain threshold, and the rule, the rule that was just changed, says they now can get Ozempic. I had to do that. The AI agent then reasons with all of this data to propose the best possible care at the highest reimbursement level achievable.
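The "best care that is fully reimbursable" logic, including the BMI-exception snag, can be sketched as an intersection of two sets: clinically ranked treatments on one side, coverage rules on the other. The treatment names, efficacy scores, and threshold below are illustrative assumptions, not real NHS policy values.

```python
def covered(treatment, patient, rules):
    """Is this treatment reimbursable for this patient under these rules?"""
    rule = rules.get(treatment["name"])
    if rule is None:
        return False          # no rule means not covered
    if rule == "always":
        return True
    # Conditional rule, e.g. covered only above a BMI threshold.
    return patient["bmi"] >= rule["min_bmi"]

def best_reimbursable_care(candidates, patient, rules):
    """Pick the highest-efficacy treatment the payer will reimburse."""
    eligible = [t for t in candidates if covered(t, patient, rules)]
    return max(eligible, key=lambda t: t["efficacy"], default=None)

candidates = [
    {"name": "Ozempic",   "efficacy": 0.9},   # illustrative scores
    {"name": "Metformin", "efficacy": 0.7},
]
rules = {"Metformin": "always", "Ozempic": {"min_bmi": 30}}

# Below the assumed BMI threshold, the agent falls back to the covered drug...
pick_a = best_reimbursable_care(candidates, {"bmi": 27}, rules)
# ...above it, the exception rule makes the better drug eligible.
pick_b = best_reimbursable_care(candidates, {"bmi": 32}, rules)
```

The real agent would retrieve both inputs via RAG, but the final step is this kind of constrained maximization.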
That’s the goal of it, and in most places in the world, the government is the payer for healthcare. And one last thing that we also did, and we have examples of this that we’ve experienced: a lot of clinics and hospitals in the world, including in the United States, don’t have lots of cash on hand. If they haven’t gotten their insurance reimbursements on time, sometimes they can’t provide care to new patients. They are just running short of cash all the time.
What the AI agent can do here is give the bank all of the information about a particular collection of reimbursements, assuring the bank that those reimbursements adhere to all of the reimbursement rules and that the clinic or hospital will in fact be reimbursed: a 99% chance, a 95% chance. You can discount it a little bit, and the bank will then loan on those receivables. So it’s a fascinating set of problems. When you look at the financial aspects of the healthcare ecosystem, it is very expensive to run. There are a lot of administrative tasks that we can automate away using AI, letting patients spend more time with their doctors, who are focused on care. We can figure out how to get the highest achievable reimbursement and how to get the hospital the cash it needs to continue operating, all via automation, so the doctors’ and nurses’ time is spent much more efficiently with patients. As I say, AI will make things so much better for all of us.
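The receivables lending described above comes down to simple expected-value arithmetic: weight each claim by its estimated reimbursement probability and lend against the discounted total. The amounts and probabilities below are illustrative figures, not data from the talk.

```python
def discounted_receivables(claims):
    """Expected value of a batch of reimbursement claims.

    Each claim is (amount, probability-of-reimbursement); multiplying and
    summing gives the discounted value a bank could lend against.
    """
    return sum(amount * prob for amount, prob in claims)

claims = [
    (100_000, 0.99),  # clean claim, near-certain reimbursement
    (50_000,  0.95),  # slightly riskier claim
]
loanable = discounted_receivables(claims)  # about 146,500
```

The agent's job is supplying those probabilities with enough rigor that the bank trusts the discount.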
Okay. So Oracle Cloud is very unusual. In the simplest sense, Oracle does infrastructure and applications. We do scaled enterprise applications and we do scaled AI infrastructure, and we’re the only cloud that does both. The other big clouds, Microsoft, Amazon, and Google, really do not do healthcare applications, enterprise applications, big financial applications.
In other words, they may or may not develop AI technology themselves. Google does; the other two don’t. But they are not building large, scaled applications that try to automate industries or ecosystems using this technology. So our goals are different from those other clouds. We are a participant in creating AI technology, and we are also a participant in using that technology to solve problems in different ecosystems and different industries.
And we are obviously very large in training AI models, and we have a bunch of those models, some of which we trained, some of which we didn’t. We have those models in our cloud for you to use to solve your problems, for you to do AI reasoning on your private data to solve the problems you want to solve at your company. And we have AI code generators. Code generation is the thing Anthropic is most famous for; we’ve been doing this for a long time.
One thing I can say about our new Apex code generator: every application it generates is scalable, secure, and reliable. Every one. And we’ve been doing that for a long time. Now we’re doing complete code generation using AI and Apex. We are the only ones building the suites of applications to modernize not just industries but complete ecosystems.
Healthcare is one example, but utilities is another. We are taking on entire ecosystems, which makes things work much more efficiently. I mean, you are only as strong as the weakest link in the chain. If you have to interact with a regulator, let’s say a regulator that oversees clinical trials, and that regulator says, okay, once you have finished your clinical trial, print out all the results and send them to us in boxes of paper. I will not mention any names, but that happens all over the world.
It makes new drugs incredibly expensive and takes forever to get them out. It’s a huge problem. So you have to automate these entire ecosystems as a goal. And then you have to build these complex, robotic pieces of software called AI agents that not only automate processes within a company but also automate processes between companies: how one company talks to another, how a hospital talks to a bank.
Okay. Alright. That’s phase one of my presentation. We’ll be serving dinner; that’s why I arrived a little late, so that we can go straight to dinner when we’re done.
Okay. So far we’ve looked at how the AI models work, how they are built, and how Oracle is different. Now I would like to take a look at the world as I think it is going to be because of AI. By and large, we are going to live much better lives: healthier, longer lives, eating better food, living in better houses. It should be a much better world, because these tools are so enormously powerful.
But some of the things they will do are a little bit shocking. Alright. These are some of the things we are working on; I can go through them line by line. We are working on biometrics.
We can prevent identity theft using AI. So no more logging on, no more passwords that get stolen, no more intrusions, no more data that gets stolen, no more sending in your credit card and getting a new one. We can make credit cards fraud-proof, if that’s the kind of credit card you want. I don’t know anyone who likes spending time in the hospital.
And the hospitals have figured out that the sooner they can get you out of the hospital, the better it is for them too, because some of the nastiest pathogens are lurking in the halls of hospitals; the quicker we can get you home, the happier the patient and the safer you are. So we can build these IoT medical devices that monitor you at home as well as we can monitor you in the hospital. And even if, in an emergency, you’re being transferred back and forth, the ambulance is also always connected. If you have a patient at home, they are always being monitored by hospital staff; if you have a patient being transported in an ambulance, there is an audio, video, and digital connection between the ambulance and the emergency room. Diagnostic images.
Speaking of AI reading them: I remember one time I flipped my motorcycle upside down. Don’t ask what I was doing, and I wasn’t that young either; I don’t even have that as an excuse. Anyway, I landed on my right side and broke eight ribs. I remember going into an MRI and they were counting: one, two, three, four. What are you doing? I’m counting your broken ribs.
Oh, great. I was having an MRI, but the only thing they did was count my broken ribs. There was all this other data that the MRI produced, and no one looked at it. That’s always the case when you get one of these scans: you’re looking for one or two things and you just ignore the rest.
AI will find things that no one was looking for. Plus, it’s just more precise and more accurate. Actually, if I do this, I’ll fit all the slides on this one page, so I’m just going to do this. Identity theft.
Excuse me. As we said in earlier slides, AI knows who you are; it recognizes your face, your voice, your fingerprint. When you log in, you sit down at the computer and it says, hi, Safra, what do you want to do today? Passwords are insane.
Passwords get stolen. People write them down. The fact that your password has to be 17 characters long with at least two underscores next to each other. Are you out of your mind? Who thinks this is a good idea?
The only way I’ll ever remember it is to write it down and put it on a sticky note right next to my computer. Why? This is just idiotic. So no passwords. It’s all biometric.
Much better for everybody. Better data privacy. Credit cards, if you want them: we will have optional credit cards that are biometric, so no one can use them but you. It is very hard to imitate people, so this dramatically reduces credit card fraud.
The banks pay for all the credit card fraud. If the banks don’t have to pay that, your interest rates are going to go down. It’s going to be better for everybody; it’s going to save a lot of money and keep your data private.
Patient monitoring, I mentioned this. We’re going to have these fabulous, low-cost medical devices, and I’m going to come to how they’re going to be so low cost: we can mass-produce them at higher quality. All medical devices should be attached to the Internet. The data should go into a secure database where it is your data and you decide who gets to see it: your doctor, a health professional who is monitoring your care. You keep it private, but that data is immediately accessible by your doctor.
And if your doctor has set an alarm, say if your blood pressure drops below a certain threshold or goes above a certain threshold, they will be immediately notified. You can do all of that. You are going to get much better health monitoring at home, in the ambulance, wherever. As I say, between your home and the emergency room, the ER doctors are talking to the EMTs in the ambulance. And believe it or not, we are building one.
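The doctor-set alarm described above amounts to checking each reading against per-patient thresholds. Here is a minimal sketch; the vital-sign names and threshold bands are assumed values for illustration, not clinical recommendations.

```python
def check_vitals(reading, thresholds):
    """Return alerts for any vital outside its (low, high) band."""
    alerts = []
    for vital, value in reading.items():
        low, high = thresholds[vital]
        if value < low:
            alerts.append((vital, "below", value))   # e.g. BP dropped too far
        elif value > high:
            alerts.append((vital, "above", value))   # e.g. BP spiked
    return alerts

# Per-patient bands the doctor would configure (illustrative numbers).
thresholds = {"systolic_bp": (90, 140), "heart_rate": (50, 110)}

ok = check_vitals({"systolic_bp": 120, "heart_rate": 72}, thresholds)
low_bp = check_vitals({"systolic_bp": 84, "heart_rate": 72}, thresholds)
```

In a real monitoring pipeline this check would run on each streamed reading, with alerts routed to the care team.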
We are actually building these prototypes. Will we mass-produce an ambulance? I have no idea. If you told me a couple of years ago we’d be building billion-watt power plants, I would have said you need to get more rest; that’s not going to happen.
But yes, we’re looking at doing this, because the ambulance is connected and loaded with AI, and it’s just a much safer way to transport patients. The diagnostic imaging: my wife was pregnant. We were living in Hawaii at the time, and she went in for a sonogram, and two things were crazy. One is the tech took a ruler and was measuring fetal development, measuring how big the skull was and how long the spinal cord was, with a ruler on the screen of the sonogram.
And I said, whoa, whoa, whoa, that’s a two-dimensional ruler measuring a three-dimensional shape floating in fluid. Are you kidding? Who thinks this is a good idea? We can do this very accurately with the computer; even with primitive AI, we should have been able to do that.
It then got worse. We were on the island of Lanai and the doc was actually in Honolulu, and the tech held up her iPhone to the sonogram screen so that the doc could see the fetal image. I thought, oh my god. You can’t record this in high resolution and transmit it digitally? You’re FaceTiming the image over? What the hell is this?
Actually, I remember saying one thing. I said to the tech, look, I promise to fix this. This is awful; I can’t believe this is going on.
But of course, with AI’s 3D vision, we can accurately measure fetal development on the sonogram. We will again find things doctors aren’t looking for. Imaging: right now, one of our partners looks at tumor biopsy slides and can diagnose the cancer from the image.
In a few minutes. Going through the entire process, doing all the genetic testing and all of these other things, might take a week or two: a week or two of worry and a week or two without treatment. AI is going to allow us to get a response very quickly, either to say you’re fine, you’re clean, everything is good, or no, you need to start this drug right away. In both cases, we get better outcomes.
This is very interesting. This is a device we’re working on called a metagenomic testing device, about our ability to identify pathogens. When someone gets sick, we have a testing methodology called PCR. If we suspect you have influenza A or influenza B or this coronavirus, COVID-19, we can test against a panel of some number of known respiratory viruses. But if you have something that’s odd, it comes up just as PCR-negative. We don’t know what it is.
What we really want to do is genomic testing on that sample. But before we can do genomic testing, we have to culture it and wait several days, and it could take a week or two before we know what you had. Either it went away, or you did, if it was particularly bad. This is a new sensor that will simply do gene sequencing.
It will do gene sequencing of everything in the sample. You take blood, and obviously in your blood are your own genes. Also present is something called ctDNA, circulating tumor DNA. If you have cancer, even a stage one or early stage two cancer, there are small fragments of circulating tumor DNA in your blood that we can discover by gene sequencing everything in the sample.
The problem with circulating tumor DNA, and people have been trying to work with it in the past, is that your immune system will cure a lot of cancers without you ever knowing you have them. The immune system clears up a lot of cancers before you’re ever symptomatic. And if we keep telling you, oh my god, we found this cancer, we need to start treating you, in fact, no, we don’t. Your immune system is going to clean that up. Do absolutely nothing.
So the false positives are deadly in this. However, with AI now, we can look at the fragments and distinguish between false positives and a real, serious problem that you should start treating immediately, early. So this has the promise of giving us very, very early cancer diagnosis, which everyone knows leads to a much higher likelihood of a positive outcome. It also will allow us to find any bacteria, fungus, or virus, any living organism or pathogen that you’re infected with, and tell you exactly what that pathogen is even if it’s novel, like COVID-19 was, so we know how to treat it.
It will tell you if that pathogen is resistant to certain antibiotics, specifically which antibiotics it’s resistant to, and which antibiotics we should treat you with. Now, we actually have a partner here, who I think went on earlier, working on that same exact problem, which is very, very important. Imagine this device being a low-cost device in pathology departments and hospitals all over the world, so we can do this one blood test and find whatever pathogen you’re infected with. If we had that, we never would have been caught off guard by COVID-19. We would have had early warning.
We would have discovered it far before we actually discovered it. Those metagenomic sequencers would be the perfect early warning system for pandemics. That’s why we’re working on them and that’s why we need them. Okay. Building all of these medical devices, building them reliably: if you want to put this metagenomic sequencer in every hospital all over the world, or most of the hospitals all over the world, they can’t cost a million dollars.
They can’t cost $100,000. You have to make them cost-effectively. You have to mass-produce them. You have to make them in robot factories. If you make them in robot factories, you get much higher quality and dramatically lower costs.
I think we have a video. This is a disc; you actually put the sample into the disc, spin the disc, and run all of these tests on it. Actually, I think that video, when I saw it, lasted three minutes, and Maddie told me, no way am I putting that whole video in your presentation. But it is remarkable that there are no people in the room when the device and the disc are being built. Here is another one.
For this one, you will be happy to hear we don’t have a video, just a couple of pictures. Growing inside reduces the amount of water that we use to grow food by 90%. That in itself is essential, because we are running out of food in the world, by the way. I think by 2050, Africa will be our most populous continent.
Think about that. Asia has India and China; those are big countries with a lot of people. Africa will be larger. We need to produce much more food than we currently do.
We are going to run out of water. We are going to run out of arable land. We can’t keep taking habitat and converting it to farmland. We have to be much more efficient, and by growing in greenhouses and moving plants around, we can be: plants only need a lot of room in the few weeks right before they are harvested. Otherwise, they can grow in much more confined areas.
If you can move the plants around, you use much less water and much less space, and you save habitat. If you are growing indoors, you can grow near urban centers. I don’t suggest you put a greenhouse right in the middle of New York, but you can put it 50 miles away, and because you are growing near population centers, the CO2 output for transporting the food is greatly reduced and the food is much fresher. In a greenhouse, there is a harvest every morning; it is delivered to the grocery that afternoon and could be eaten that evening. So the food is much fresher, lower cost, more nutritious, and tastier, and we’re actually building these robotic greenhouses.
There should be a picture coming up. Yeah, that’s real. Just hold that. This is also, as I pointed out to Elon, a Martian habitat.
This building, which is very large, you can imagine as a greenhouse. That yellow thing on the lower part is a rail system that moves the plants around from one location to another. No human beings are allowed in the growing area, because human beings contaminate it. We literally lift the plants up and move them into a harvesting area where people are allowed. The growing area is also very, very high in CO2, and it’s very humid.
It’s very unpleasant for people: very high CO2 is good for plants, not so good for human beings. By the way, there’s no structure; it is an air-pressure building. Fans keep the pressure inside the building higher than the pressure outside, and that positive pressure is what holds up the roof, which is made of ETFE, the most sunlight-transparent material known to man, and also quite strong.
And those are steel cables in the arches, anchored to a concrete footing around the base. So literally, you have a robot dig the footing, you snap the steel cables onto the fiducials on the footing, and then you turn the fan on and inflate the building. You fold the building up; the building is fabric with steel cables.
You fold it up in nice packages and transport it to where you’re building it, or you transport it to Mars on one of those big rockets, and then Elon can build his house right in the middle of it and have beautiful rose gardens and all of that other stuff. It’ll be lovely. But I’m not going. I will go to this one; the first ones are in California and Texas, which is way closer than Mars. Here is another picture of the same building.
They are big; the green areas are the harvesting areas, and the walls lift up where the trucks arrive to deliver the food. Okay. This is going to be shocking. The first thing we did, and we’ve actually done this.
It’s actually a company that I’m involved with called WildBio. It’s part of an institute I’ve got at Oxford called EIT, the first time I’ve ever put the family name on something. And one of the companies we have there is WildBio.
The first thing they did was modify the wheat plant, which is a grass. They modified wheat to produce 20% more food per acre, more grain per acre, which, since we are running out of food, seems like a good idea. Now, it’s really interesting if you produce 20% more grain per acre. What wheat does, basically, is take CO2 and sunlight and mix them together to create food. So if you’re growing more grain, you’re consuming more CO2. And where that CO2 ends up, if you have AI designing the wheat, is really up to us.
So we built this wheat that’s much more efficient at photosynthesis than conventional wheat. Once we’ve absorbed the CO2 into the wheat, we could choose to take that CO2 and convert it into calcium carbonate. By the way, that’s exactly how coral reefs get built: a coral reef converts CO2 and sunlight into an inert mineral called calcium carbonate. And we grow a lot of wheat around the world every spring.
We plant several Amazon rainforests’ worth of wheat. And if you want to, you can not only produce more grain, you can convert more CO2 directly into calcium carbonate, removing it from the atmosphere forever. I know people always have interesting ideas on how to manage the climate and atmospheric CO2. But in this particular case, if you wanted to go from the current level of 440 parts per million of CO2 in the atmosphere, which some people think is too high, down to 400 parts per million, you could do that simply by having the wheat and the corn and the soybeans and so on convert CO2 into calcium carbonate. You can manage the CO2 level in the atmosphere to whatever level you deem appropriate, and if you think the sweet spot is 400 parts per million, fine.
Now someone will say, no, no, we want to get rid of all the CO2 in the atmosphere. Well, pack a lunch, because if you get rid of all the CO2 in the atmosphere, all the plants on the planet will die. So don’t go to zero; that’s a really bad idea. But the sweet spot in terms of stabilizing the climate probably is going from 440 to 400, and it is something we can do, basically for free.
There is basically no cost in doing it. It is just a natural process called biomineralization, and using our food crops, we could actually increase the food yield while lowering CO2. This is what I mean about AI: AI is a pretty amazing tool. There are a lot of problems we can tackle that we’ve been unable to solve for a very, very long time, problems that are very, very contentious within our society.
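The biomineralization mass balance is straightforward chemistry: CO2 and calcium carbonate each contain one carbon atom per molecule, so the mass of CaCO3 formed per tonne of CO2 captured is just the ratio of their molar masses. A quick worked version:

```python
# Standard molar masses, in g/mol.
M_CO2 = 44.0     # C (12) + 2 x O (16)
M_CACO3 = 100.0  # Ca (40) + C (12) + 3 x O (16)

def caco3_from_co2(tonnes_co2):
    """Tonnes of calcium carbonate formed when this much CO2 is mineralized.

    One carbon atom per molecule on each side, so the conversion is the
    molar-mass ratio: about 2.27 t of CaCO3 locks away 1 t of CO2.
    """
    return tonnes_co2 * M_CACO3 / M_CO2

per_tonne = caco3_from_co2(1.0)
```

Scaling from per-tonne chemistry to parts-per-million of atmosphere would, of course, depend on how much cropland adopts such engineered plants, which the talk leaves open.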
But you absolutely have the ability to do this. Corn. We’re working on also working on corn. Another huge problem with agriculture is nitrogen fertilizer. You fertilize all these crops to increase the yield.
The problem is fertilizers are made of nitrogen and it rains and that you have huge nitrogen run offs into river basins and into the ocean and that pollution does a lot of a lot of damage in our environment. Rather than using fertilizer, nitrogen fertilizer to nourish the plant, the atmosphere has got a huge amount of nitrogen in it. Why don’t you simply engineer the plant to take the nitrogen directly out of the atmosphere? And we know how to do that. There is an enzyme in in in the world called nitrogenase and nitrogenase quite literally takes atmospheric nitrogen, does it with soybeans for example, it’s unique unique to soybeans, takes atmospheric nitrogen and and uses it as a nutrient for the for the plants and you don’t have to use nitrogen fertilizer.
You can get rid of all the nitrogen fertilizer. In Africa, a lot of farms can’t afford to use nitrogen fertilizer. But even for the ones that can afford it, it’s a waste of money and it is damaging to the environment. So you can engineer the plant to get the nitrogen directly from the atmosphere, and the plant is just as tasty, just as nutritious, and just as healthy.
Getting the nitrogen from the atmosphere works just as well as getting the nitrogen from fertilizers added to the soil. That’s another problem AI makes it easy for us to solve. You’re going to be very happy: that was my last slide with words on it. I have one more video, one more picture, and then the three of you who are going to stay can ask questions.
So, autonomous drones. Anyone who’s looked has seen the way drones have been developed in Ukraine for military purposes. Fortunately, drones have very wonderful uses beyond how they are being used in Ukraine; the war in Europe is just terrible. We built an air traffic control system for drones, and we are actually using drones to deliver blood samples from clinics to testing laboratories. And we built what we call an RFID specimen vault: we put an RFID tag on the sample so no one knows this is Larry Ellison’s blood, or whoever’s.
They just know there’s an RFID tag on the blood, and then the test results go into the cloud and eventually make it back to my doctor and to me. But otherwise, in the chain of custody, no one can distinguish whose sample it is; my personal privacy is not compromised at all by doing this. The other problem is, you know, sometimes they do a great job of protecting your personal privacy by losing your blood sample, or thinking it was somebody else’s blood sample. That’s not a great way to protect our personal privacy. So another thing we built are these specimen vaults that take samples from the hospital or the clinic to the lab, where the results then go into the cloud.
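The privacy property described here, that nothing in transit links a tagged sample to a patient, can be sketched as a simple data model. Everything below (the class, the random-tag scheme, the names) is a hypothetical illustration of the idea, not Oracle’s actual implementation:

```python
import secrets

class Cloud:
    """Hypothetical sketch of a de-identified specimen chain of custody.

    The cloud holds the only mapping from RFID tag to patient; couriers
    and labs ever see nothing but the opaque tag.
    """

    def __init__(self):
        self._tag_to_patient = {}   # private mapping, never leaves the cloud
        self._results = {}

    def register_sample(self, patient_id: str) -> str:
        tag = secrets.token_hex(8)          # opaque RFID tag identifier
        self._tag_to_patient[tag] = patient_id
        return tag

    def post_result(self, tag: str, result: str) -> None:
        self._results[tag] = result         # the lab knows only the tag

    def results_for(self, patient_id: str) -> list:
        return [r for t, r in self._results.items()
                if self._tag_to_patient.get(t) == patient_id]

cloud = Cloud()
tag = cloud.register_sample("patient-42")
cloud.post_result(tag, "cholesterol: normal")  # lab sees only the tag
print(cloud.results_for("patient-42"))  # ['cholesterol: normal']
```

The design point is that re-identification happens only at the endpoint that already knows the patient, so a lost or intercepted sample leaks no identity.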
But the other thing the drones can do is detect forest fires immediately with infrared cameras. They can even figure out who set the forest fires. Tragically, the Palisades fire and a number of the fires in California were set by arsonists, unbelievable tragedies. We can detect the fire immediately and start to fight it immediately, and if someone set the fire, we can figure that out too. And we shouldn’t have police cars chasing other cars around in those high-speed chases.
While the videos look kind of cool, they are very dangerous, not just for the police but for civilians in cars nearby. We can have drones follow those cars; it’s way better. Okay, I’m going to now go to my last picture.
That’s the RFID specimen vault over there, and the last video will be coming up. There it is. Sure enough. So you can deploy these; it would have been great in the Palisades. It’s dry, it’s the dry season. You send the drones up, you can have a series of these, you’ve got a lost hiker out in the wilderness or something like that. They’re portable. I think it’s going to land now, and if it gets down safely, I will take my first question. It’s a video.
It’s gonna get down safely.
This article was generated with the support of AI and reviewed by an editor. For more information see our T&C.