An Open Discussion About The AI Revolution

Most Mondays at 5pm ET, Bryce and I do a livestream on YouTube with my old friend, John De Oliveira, who’s been working in the AI industry for decades (we won’t be doing the AI Livestream next week, as we travel to Dallas to meet with Dallas News). I found today’s call to be very insightful and learned quite a bit, so I thought it’d be helpful to get a transcript of the whole call for Trading With Cody subscribers. I tried to get both ChatGPT and Google Bard to do the transcript for me, but they weren’t quite up to the task yet. So I paid Rev.com to do it with a human transcriber. I’ve lightly edited at least the first part of the transcript. You can watch the whole video here on YouTube.

(By the way, this week’s chat will be at the normal time on Wednesday at 3:00 pm ET in the TradingWithCody.com Chat Room or you can just email us at support@tradingwithcody.com.)

In this transcript, Bryce, John and I discuss recent developments in the AI industry, including more on the controversy surrounding Sam Altman’s departure and return to OpenAI. We also discuss the potential impact of AI on various industries, such as customer service, supply chain logistics, and small businesses. John explains the concept of artificial general intelligence (AGI) and its potential applications. We also touch on the concern of AI taking over jobs and the value of foundational AI models. The conversation concludes with a discussion on the potential threats and benefits of AI, including the spread of fake news and the need for trust in an AI-driven world.

Cody:
Welcome, everybody. It’s the AI Revolution with J DeO, Bryce Smith, and Cody Willard. Every Monday at 5:00 PM Eastern, we meet with John De Oliveira, who has been working in the AI revolution for

John:
Ever

Cody:
30… No, 26 years. 26 years, is that right? You left University of New Mexico to go work at Cyc AI-

John:
In ’98.

Cody:
98? No, before that. ’96. Because I came to New York in ’96.

John:
I got married in ’96.

Cody:
You weren’t working at Cyc then?

John:
No. And you were at my wedding.

Cody:
Yeah, I came home for it.

John:
Oh, you did? Okay. Yeah. Thank you for that.

Cody:
Of course.

John:
I’m a little late in thanking you.

Cody:
Yeah, should have thanked me at the event, but that’s all right, ha.

John:
Ha, I’m sorry.

Cody:
At any rate, guys, look, last week we talked a lot about the big news that Sam Altman got fired and then rehired. The OpenAI board called him a liar and then they were like, “Ah, it’s okay. You’re a liar. Come on back. You can run our company anyway.” Microsoft’s CEO was like, “Hey, we’re hiring him to run our AI division.” And then 24 hours later, the CEO at Microsoft was like, “Ah, never mind. He’s going back to OpenAI.” I still think that there are major shoes to drop. I think there’s a lot more to the story. We still don’t know why the board called him a liar, but that’s not what we’re talking about today.

John:
Well, you just did talk about it. Now we just got to leave it there?

Cody:
You can throw more in, give us your last bit on it and then we’ll move on and ask questions about AI in general, because I have a lot of new questions.

John:
Are you caught up on Helen Toner?

Cody:
Tell me.

John:
She was an OpenAI board member. She had come out with, I guess, a paper that was talking down OpenAI and saying Anthropic was better, and here she is on the board of OpenAI. She had basically revealed a motive of wanting to, or at least an idea of how she could, kill OpenAI. I mean, this is just another angle. There are all these different angles that have happened, but she was on the board. She was one of the ones who voted him out, and it’s weird from so many angles. The other one we talked about was that Ilya was not completely in his right mind. He’s doing things like burning effigies for… Yeah, there he is. What does it say?

Cody:
“Feel the AGI.” Chant it with us Bryce. “Feel the AGI. Feel the AGI.”

John:
Feel the AGI.

Cody:
Come on, Bryce.

Bryce:
No.

John:
This is who we’re trusting our future with. But he also voted against Sam. There’s enough weirdness in it to go around.

Cody:
Then there was the Quora CEO, who’s also on the board. He had a product that he had just built on ChatGPT, and when OpenAI did their big reveal with Microsoft two or three days later, it completely subverted his entire business model.

John:
And yet he’s still on the board, so he must not have been one of the biggest… It’s possible he didn’t even vote to kick Sam out, but they got continuity by keeping him anyway.

Bryce:
Is Ilya still on the board?

John:
Ilya is not. What I wasn’t clear about, is Sam on the board?

Cody:
No. He’s off the board, last I heard.

John:
So anyway, it’s a big weirdness.

Cody:
John, it’s all a mess. There’s more to the story. I’m telling you.

John:
Okay.

Cody:
There are cockroaches coming out from ChatGPT.

John:
The other piece of the story that people are talking about, which I don’t give much credence to, is the whole Q* thing. Are you familiar with that?

Cody:
I am. But explain AGI to everybody and how Q* works into this and everything else.

John:
The idea is that there was this recent discovery inside of OpenAI that is, it’s like, oh, they have AGI or it’s a big step toward artificial-

Cody:
And AGI, tell everybody what AGI is.

John:
Yeah, AGI is artificial general intelligence: the ability for computers to behave, they say, above, and they keep changing the definition, but above the level of people. Which it basically already does when it comes to answering questions or knowing stuff; there it’s above any human. But for it to really act like a human, it’s got to have a memory, know what it did and what it’s going to do, and know that the things it does affect things in the world. It doesn’t have to be conscious, but it has to have a lot more than it does have.
And the big thing that Q* brings, which is this new algorithm inside of OpenAI, the discussion is that where ChatGPT has been able to be creative by piecing together existing things, it’s not learning and coming up with new knowledge that wasn’t there. It’s not doing scientific discoveries, it’s not figuring out how math really works, these kinds of things. And so this new thing is, let’s put it this way, a combination of GPT and the stuff that created AlphaGo, the computer that, after machines had won at chess, won at the game of Go, which is much harder than chess.
It did that through self-play, a kind of adversarial setup: it would internally create two players and have them play against each other a million times, and it would, by itself, learn how Go works and how to win. And it beat the world champion about 10 years sooner than they thought it would. So take that kind of technology and combine it with GPT, which just knows everything there is to know on the web, and that’s where everyone freaked out and said, “Oh, I guess AGI is around the corner.” I don’t believe it.
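
(Editor’s note: to make the self-play idea concrete, here’s a toy sketch in Python. This is my illustration, not DeepMind’s or OpenAI’s actual code: two copies of the same learner play a tiny take-the-last-stone game against each other thousands of times and, with no human examples, converge on the known optimal strategy.)

```python
# Toy self-play sketch (illustrative only): the game is Nim-style, a pile of 15
# stones, each turn take 1 or 2, whoever takes the last stone wins. Two copies
# of the same value table play each other and learn from win/loss alone.
import random
from collections import defaultdict

Q = defaultdict(float)      # Q[(stones_left, move)] -> learned value estimate
ALPHA, EPSILON, GAMES = 0.5, 0.1, 20000

def pick(stones, greedy=False):
    moves = [m for m in (1, 2) if m <= stones]
    if not greedy and random.random() < EPSILON:
        return random.choice(moves)                  # explore sometimes
    return max(moves, key=lambda m: Q[(stones, m)])  # otherwise play best known

for _ in range(GAMES):
    stones, history, player = 15, [], 0
    while stones > 0:
        move = pick(stones)
        history.append((player, stones, move))
        stones -= move
        player ^= 1
    winner = history[-1][0]  # the player who took the last stone
    for p, s, m in history:  # reward the winner's moves, punish the loser's
        reward = 1.0 if p == winner else -1.0
        Q[(s, m)] += ALPHA * (reward - Q[(s, m)])

# The learned greedy policy leaves the opponent a multiple of 3 whenever it
# can, which is the provably optimal strategy for this game.
print([(s, pick(s, greedy=True)) for s in range(2, 16)])
```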

Cody:
It didn’t seem to me like that AGI threat made sense as the reason they removed Sam. The next step of that logic is that Sam was embracing it and going to release it to the world before they could control it and make sure that people weren’t going to destroy the world or something. That doesn’t seem like the real reason to me, and it doesn’t sound like you believe that either.

John:
Well, I believe they believed it. I do think it’s a possible reason that the Doomers, which is what you’d call these folks like Helen Toner, would say, “Oh no, we’re on the brink of disaster because Sam is one of these people who wants the world to have this technology.” And by the way, I want this technology, so that’s why I’m on the side of Sam on that one. But they’re like, “Oh, he’s about to hand out something and the world’s going to fall apart if he does.” I don’t believe that they’ve made that much of an advancement. I don’t believe the world’s going to fall apart. I do believe that there are people who believe that and who would do stuff like try to replace him.

Cody:
Including people on the board? Do you think the people who were sitting around for the last year, with ChatGPT making these advancements, were scared enough that they might’ve removed Sam for that reason?

John:
I know for one of them that it’s well documented that that’s the way she thinks about it.

Cody:
Why haven’t we seen Helen Toner on CNBC or on 60 Minutes? Where’s the CEO of Quora?

John:
She seems pretty low profile.

Cody:
Why aren’t we interviewing him? Why can’t we get more information out of these people?

John:
She tweeted some inane kind of tweet right after this was all over, like, “Oh well, now I can go get some sleep.” And it’s like, you did something good? It was so tone deaf, what she tweeted, that people were just like, can you just leave? Just delete yourself, go away. So yes, it’s interesting that she wasn’t, but I don’t know if she was asked to be interviewed.

Cody:
Who’s the CEO of Quora? What’s his name? I can’t think of it. I’ll pull it up. The other guy on the board.

John:
Yeah, if you said it I’d say that. Yeah, that guy.

Cody:
You sent us a text using the name the other day. Adam D’Angelo.

John:
Yeah, Adam D’Angelo. Not the Twitch guy. That was something else.
The Twitch guy was also very much on the safety side, but he wasn’t on the side of knocking them out for no reason. I actually read one story that there was some district attorney who got in touch with the board and said, “What’s your evidence?” And when they couldn’t come up with anything, there was potential for actual wire fraud charges against some of these board members for coming out and saying, oh, this bad stuff happened so we had to kick him out, and then having nothing to go on.

Cody:
All right, well John, here are the questions we’d like to talk about today. Is AI advancing quickly enough that large corporations and small businesses, maybe one more than the other, are going to be able to apply AI to customer service, to supply chain logistics, to their entire business model, every facet of it, HR, anything? That costs for giant corporations are going to go down quickly over the next 3, 4, 5 years? That small businesses will be able to utilize AI, that a small craft brewer in rural New Mexico will be able to utilize AI to do marketing and distribution and get into more bodegas and small stores and grocery stores? Is it realistic to look out over the next 2, 3, 4 years and see an impact on the profitability of corporations and the prosperity of small businesses?

John:
I saw one projection that over the next couple of years it would have, what did it say, an 18% impact on GDP because of all of this. And I was like, no way. It’s way more. Here’s the thing: they don’t even have to release anything else. We don’t have to ask, is AGI going to happen? Is GPT-5 going to happen? With the stuff that’s already there, I see tremendous productivity gains. I see companies looking at a worker, say a white collar worker, and saying, “Can I 10x the productivity of this one worker right here? And if not, I have to fire them.” Because the workers they’ll keep are the ones they can get that level of gain with. And it’s not because the worker is that incredible. People ask, will people lose their jobs because of AI? As Emad Mostaque, the head of Stability AI, just said: AI is not going to take jobs from people. People using AI will take jobs from people not using AI. And that’s, I think, what you’ll mostly see.

Cody:
Why is it that at this stage it’s the giant cloud/internet companies like Microsoft, Google, Meta, and Amazon that are… The only advancement we’re seeing in real time is capital expenditures. These companies are ramping up spending, going from a few billion dollars on infrastructure for internet and better web services to tens of billions of dollars to advance their own AI LLMs and their own machine learning. The spending is happening. But it feels a little bit like 1999 or 2000, when all the companies were building giant internet servers inside their own networks and serving their web pages, and a small business would buy one server, build a couple of websites, connect it to the internet, and people would use dial-up to access some of the stuff. But pets.com spent hundreds of millions if not billions on building their internet service to sell pet store stuff, and then they could never ramp the sales of the products and services they were selling fast enough to cover how much they were spending. And by the time Chewy came along and was actually making profits on selling pet supplies, pets.com had been bankrupt for 20 years.
It feels a little bit like we’re in this pause where you and I are right now discussing the potential for AI to change the S&P 500 profitability metric forever, but Coca-Cola is doing nothing with it but playing right now.

John:
So it might be a good analogy, but I wouldn’t take the bubble piece of it; I don’t agree as much. Everyone’s like, oh, this is a big bubble, because certainly there’s a lot of expectation here. There are going to be winners and losers. There always are. I think we see that kind of expenditure on the side of these cloud companies and a few others because they see it. They’re first. They know. They’re at the platform side of it, they see what’s coming, and they’re like, “I’m in. I am going to spend what’s necessary because the other pieces will make use of this.” But then it gets down to the thing we talked about with vacuum cleaners and electricity. It’s diffusion of the technology. How does it get out there, how does it get widely used? And certainly there’s a lot to be figured out.
That’s an area I’m interested in doing something with: there’s a lot to be figured out about how to make use of it. ChatGPT has a hundred million users and it’s much more sophisticated than Google, but it’s also pretty easy to use. As I’ve said, I don’t think Chat is the end-all of anything. It’s more like electricity: you get to plug in and see lights happen and you’re like, oh, that was cool. So you can do stuff with Chat for sure, but there are many, many more apps to be built using that new electricity, that new power, and I think it’s up to us to figure out what the vacuum cleaners of today are. What are we going to make that’s going to make use of it?
There’s no doubt in my mind about the power of this thing, what it enables. There are so many things that you would’ve needed people for that you don’t anymore. And we’re still talking about it like it’s all this little ChatGPT conversation. Think of GPT-4 and whatever, how powerful it is, and think of there being thousands of those working on your problem. Instead of just Chat, instead of saying, “Oh, I’m going to make these characters that can chat,” what if that character is a worker, and it specializes in marketing, and this one over here specializes in pricing strategy, and this one specializes in customer support, and you’ve spent some time telling them what you expect of them, and then they’re hooked in to go and do things on your behalf? It’s like that.
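
(Editor’s note: here’s a minimal sketch of the pattern John is describing, one base model given different standing instructions per role. The call_llm function below is a hypothetical stub standing in for whatever chat-completion API you’d actually wire in; this is an illustration of the idea, not anyone’s product.)

```python
# A toy "team of specialized AI workers": the same underlying model, pointed
# at different jobs via standing role instructions.

def call_llm(system_prompt: str, user_message: str) -> str:
    # Hypothetical stub: swap in a real chat-completion API call here.
    return f"[reply in the voice of: {system_prompt.split('.')[0]}]"

ROLES = {
    "marketing": "You specialize in marketing for a small craft brewer in rural New Mexico.",
    "pricing": "You specialize in pricing strategy for small beverage brands.",
    "support": "You specialize in customer support. Be brief and kind.",
}

def ask_team(question: str) -> dict:
    # Fan the same question out to every specialist "worker".
    return {role: call_llm(prompt, question) for role, prompt in ROLES.items()}

for role, answer in ask_team("How do we get into more bodegas?").items():
    print(f"{role}: {answer}")
```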

Bryce:
John, one of the things I’ve been hearing and thinking a lot about is where value is going to accrue in this new tech stack, with everything centered around AI, from the silicon base layer up to applications at the top, and somewhere in the middle the foundational models like the one behind ChatGPT. I heard someone say that the Facebook strategy, open-sourcing the foundational model layer, exists because that layer isn’t actually where all of the value is going to accrue; that’s why Facebook is making it open source. The value is going to be the layer or two above that, above the infrastructure level and between the app level, including the app level. Do you agree with that framework, or do you think ChatGPT and those foundational models are actually the big value creators?

John:
No, I’m pretty much with you. I think ChatGPT and the foundational models are kind of the value enablers, but they’re not where we, or most people, are going to create and collect value. We’re a level or two up from that. And so that’s where it’s interesting. It’s not like there’s no value to those layers, but whatever value there is will get captured by five companies.

Cody:
Just to illuminate this for ourselves and for the people who might be watching: think about, say, brokerage firm services or money management people, Edward Jones people who help people try to manage their money and plan for retirement and things like that. So much of what a worker at Edward Jones or one of the others does is programmatic. They’ve been told, here are the programs and the boxes that people fit into. Each one of your clients should fit into one of these boxes. Dot your I’s, cross your T’s, cover your bases, cover your butt, and make sure you’re not stealing the money or doing something nefarious or whatever it might be.
That’s a perfect example of where AI models are probably being turned on right now, at Robinhood, for example. They will simply turn that model on and allow AI to look at what your income levels are, what your assets are, what your savings are, what your savings rate is, what your potential for inheritance is. You take those boxes that the worker at Edward Jones was previously filling, where you hoped he did it well, and you’ll have an AI model that is expert in doing that, and it will do it well.
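
(Editor’s note: a minimal sketch of what those “boxes” might look like as code. The thresholds and categories here are made up for illustration; this is not Robinhood’s or Edward Jones’s actual logic.)

```python
# Hypothetical suitability "boxes": the kind of deterministic rules an advisor
# used to apply by hand, expressed as code an AI-driven pipeline could run and
# explain. All numbers below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Client:
    age: int
    income: float        # annual income, USD
    assets: float        # investable assets, USD
    savings_rate: float  # fraction of income saved per year

def suitability_box(c: Client) -> str:
    years_to_retirement = max(0, 65 - c.age)
    if years_to_retirement < 5 or c.assets < 25_000:
        return "capital preservation"   # short horizon or thin cushion
    if c.savings_rate >= 0.15 and years_to_retirement > 20:
        return "aggressive growth"      # long runway, strong saver
    return "balanced"                   # everyone else

print(suitability_box(Client(age=35, income=90_000, assets=120_000, savings_rate=0.20)))
# -> aggressive growth
```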

John:
On the programmatic side of it, you could say, okay, it can operate with the same boxes: you fill in this box, fill in that box, and it does the right thing with it. But what it can also do is the thing that before only people could do: when a client is talking about wanting to get that water ski set or whatever, talking about things from their life, it can turn that into, oh yeah, there are going to be some expenses like that, knowing what that means. It can take it from a human standpoint. It’s not just that it can do what a computer does; it can do what a person does and still follow those rules.

Cody:
That’s where we’re hitting on this incredible advancement. If you’ve got one worker at Edward Jones who used to be able to handle 100 clients, theoretically he should be able to handle 3, 4, 5 times as many, maybe not 10 times as many in this example, because he does need to interface and have a conversation with people. But some of that stuff on Robinhood, for example, will probably be completely inside the app. You won’t be talking to a person most of the time.

John:
Yeah, I mean, that’s what we do in my company: we set up these avatars. You’re not talking to a person. And it’s still obvious right now. We’re getting to where maybe by next year it’s not obvious. It’s-

Cody:
Video generated AI interface.

John:
Yeah.

Cody:
Is that the right term? Did I say it correctly?

John:
Works for me, yeah.

Cody:
Okay. Another example that Bryce and I are thinking through right now is, do you know what Autodesk is? And AutoCAD, I’m sure?

John:
It’s been around forever.

Cody:
AutoCAD is still the primary software that people use to design bridges, cars, cameras, hats. If you’re going to design something from scratch, especially if it’s technical, maybe the hat wasn’t the best example unless it’s got a flashlight in it or something.

John:
CAD/CAM.

Cody:
What’s that?

John:
CAD/CAM.

Cody:
CAD/CAM

John:
Computer-aided design, computer-aided manufacturing.

Cody:
If you were to take all of that data that people have been building, let’s just go with bridges. If Autodesk has cloud files of hundreds or thousands of bridges that have been designed and built successfully over the last 10, 20, 30 years, they should be able to create an AI model, an AI system, that would then create initial designs for bridges. And then the guy who used to actually design the bridge will tweak it and tell ChatGPT or whatever that AI system is to change it, tweak it, make it a little bit different or whatever.

John:
Test it.

Cody:
The time for designing the bridge gets cut by 90%, and the accuracy of the model it built for that bridge is probably twice as valid and good and secure as it would’ve been if it were just a human being and a couple of people checking his work.

John:
So any job, these things are already smarter than any person. Who was it? It was probably Emad Mostaque again who said that within a few years, within five years, it will be malpractice to be a doctor who doesn’t get a second opinion from an AI when giving a diagnosis. It’ll be malpractice. It’s not just, can we use it? It’s that you can’t not use it.

Cody:
Now take it to an even further level. What would happen if you were trying to do it with semiconductors instead of a bridge? Is AI going to be smart enough that-

John:
To build itself?

Cody:
Maybe it still needs to be an engineer, a PhD-level guy, who’s interfacing with that AI system, but he could say, “Hey, I’m trying to design a full self-driving chip and it doesn’t need to have such and such storage memory, but it needs to have a lot of DRAM and a lot of throughput and visual processing.” Or maybe he doesn’t even need to tell it that. He’s like, “Hey, I need to design a full self-driving chip,” and the AI can do it, and then he can make it better or tweak it and confirm that it’s delivering the goals he wanted. Is AI going to be advanced enough to go down to the semiconductor level, and not just bridges that have been around for thousands of years?

John:
I have a good friend who designs chips and I’ve had a lot of low-level conversations with him. I really enjoy the topic. I was surprised at how far down computer language goes when it comes to chips. All the way down: when you think you’re going to be drawing where a circuit goes or something, it’s actually a DSL, a domain-specific language, that you’re using to describe that there are going to be this many transistors within this range of this memory unit or whatever. And because it’s that way, it’s even sooner that these kinds of language models can deal with it; they probably already are starting to.
So it comes down to us all starting to use this. Before it goes off on its own and does something like, I’m going to just go make a chip, it’s going to provide productivity to the people who are already doing that, so they can have that conversation with the AI: “I want to make this change, that change.” Music came a long way in the last two weeks, by the way. I don’t know if you’ve seen the latest, but it really… You’re not going to go, oh, that’s the greatest song ever, but you give it a genre and song lyrics and you’re hearing the singing of it, and it’s got the refrain and it’s got the echoing off the different pieces of it. I could send you a link, but it’s like, wow, in just a couple of weeks. It used to barely get past, oh, that sort of sounds like music. And now it’s like, no, that’s a song.
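
(Editor’s note on the DSL point John made above: chip design really is text at surprisingly low levels, and text is exactly what language models manipulate. Here’s a made-up miniature example in Python, not a real EDA tool, of a chip description as structured text that an LLM could be asked to read and revise. All names and numbers are invented.)

```python
# A made-up mini-DSL: a chip spec as structured text. Real hardware description
# languages (Verilog, VHDL and the like) are far richer, but the point stands:
# the design is a text artifact a language model can read, critique, and edit.
from dataclasses import dataclass, field

@dataclass
class Block:
    name: str     # e.g. "dram0"
    kind: str     # e.g. "memory", "vision", "interconnect"
    params: dict = field(default_factory=dict)

@dataclass
class ChipSpec:
    name: str
    blocks: list = field(default_factory=list)

    def add(self, name: str, kind: str, **params) -> "ChipSpec":
        self.blocks.append(Block(name, kind, params))
        return self

    def to_text(self) -> str:
        # Serialize to the plain-text form you might hand a model to revise.
        lines = [f"chip {self.name}"]
        for b in self.blocks:
            args = ", ".join(f"{k}={v}" for k, v in b.params.items())
            lines.append(f"  {b.kind} {b.name}({args})")
        return "\n".join(lines)

spec = (ChipSpec("fsd_soc")                                   # hypothetical chip
        .add("dram0", "memory", size_gb=16, bandwidth_gbps=256)
        .add("npu0", "vision", tops=72)
        .add("bus0", "interconnect", width_bits=512))
print(spec.to_text())
```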

Cody:
John, before we get off, before I end the stream at the end of the call: on the right side of your screen there, there’s a place to post a comment. If you would, put that link in there and it’ll show up on YouTube for the people watching. But don’t do it this minute.

John:
I have to go track it down.

Cody:
Yeah, that’s what I’m saying. Before I cut the show off, I’ll make sure that you get that link on there. We’ll give you some time at the end of the call. Before you do that though, where was I going with that? Bryce, you got other questions?

Bryce:
No, I was just going to say, I told John, I think it was a couple of weeks ago: Nvidia made their computational lithography system that they use to design chips better. They used computational lithography to make a chip, then used that chip to power AI that made the computational lithography better, and sent it back to ASML, who made a better chip. It’s already that exponential chip-design loop.

Cody:
Virtuous cycle happening.

Bryce:
AI designs the chip. The chip makes the AI better, the AI designs the chip.

Cody:
So John, when we talk about the threat of AI on society, don’t look for the music video right now.

John:
I found it already.

Cody:
Okay, great. Put it in there if you would.

John:
There it is.

Cody:
Thank you. That’s not in the chat.

John:
Oh, I got to switch over to under comments?

Cody:
Yeah.

John:
Join the chat?

Cody:
I don’t know what you’re looking at. We’re in StreamYard. There should be a place that says post a comment-

John:
I’m in it. And then StreamYard it said private chat and I put it in there.

Cody:
No, I don’t know what that is. I see it. I’ll grab it. You want comments.

John:
Oh, I switched to comments, yeah. I’d have to connect to YouTube.

Cody:
All right. I got it. I put it in there. We both put it in there now. Great. We want to be redundant. Oh, I put it in the chat. Ah, frick. All right. There we go.

John:
You put it same place I did. Okay.

Cody:
All right.

John:
Do you remember what you want to ask?

Cody:
It’s on YouTube. You people watching, it’s there twice now. I hit enter twice. It works. We’re redundant. We’re redundant. We’re redundant.
Now, John, when we talk about AI being a threat to humankind, is it the scenario we’re starting to get at right now? Is what people are worried about that at some point in the future AI says, “You know what? I’m going to design new AI chips and I’m going to send those plans over to that Taiwan Semiconductor factory and I’m going to make sure that it gets shipped to…” What’s the freaking threat, man? What are we so worried about? I’m lost whenever people are like… And look, people I respect, like Elon Musk, I certainly respect his opinion about AI. He’s worried about AI killing humankind if we don’t do it right. What is the threat?

John:
Yeah, he’s also worried that we’d better get to Mars soon because the human race is about to wipe itself out, and he’s entitled to that opinion, and I like everything that happens as a side effect of him thinking that, but I don’t share that level of concern, especially… Here, think about it like this with AI. People, I think, get confused and think that AGI means sentience, that this thing is going to know what it is and care about its own existence. It will be perfectly okay with being unplugged for decades to come. It’s like, yeah, whatever, you want to unplug me? You want to stop me forever? Whatever. So the only way it becomes a problem-

Cody:
Is it HAL that they’re scared of? What’s the movie, Odyssey 2001?

John:
2001: A Space Odyssey. Yeah, I mean, it’s lots of science fiction. It’s Terminator.

Cody:
Terminator, of course. Yeah.

John:
If you keep in mind that agency and sentience and things like that are different from intelligence, then think of it as more like the Star Trek computer. You talk to the ship and you say, “Hey, figure this out,” and it’ll figure anything out, or “make this happen,” and it’ll go do it. The problem is people, and giving people power. It’s like, hey, everybody want a nuclear weapon? Here, everybody have one. But that’s not really quite accurate either. There are far more normal non-psychopaths than there are psychopaths, and there’s plenty of time to make AIs that watch out for AIs. So I’m less concerned about the whole… The more likely thing, and people worry about this and I don’t worry about it that much, but the more realistic thing to worry about, is the fake news thing. You get an election, and the day before the election all these videos pop up of Trump doing this and Biden doing that and saying that, and neither of those things ever happened.

Cody:
Even right now. My Twitter feed is full of Cybertruck, AI, and other things I find of interest, but certainly there’s a lot of Hamas-Israel war stuff in my Twitter feed too, and I’m never sure. I don’t watch the videos or look at the pictures supposedly from the war, because on social media I don’t believe it’s going to be real. I hope that the New York Times and Fox News and the Wall Street Journal and the New York Post and the Washington Post and my other news sources that I do consume are vetting the content better than Twitter is. But I also don’t know that… The New York Times famously said there were weapons of mass destruction and-

John:
Yeah, I mean they get fooled too.

Cody:
I don’t trust anybody, and it all gets to be less and less trustworthy. I had one of my investors at the office today, and he said he has quit looking at anything on social media or the news, because he thinks it’s all lies. He’s lost faith in any of it.

John:
I feel like an interesting question is: how can we use this new technology, which enables all these possible falsehoods, to our advantage, to help us find what’s true or real, or to set up structures of trust among people? Where it’s, I trust you. I can go contact you and be like, I know you don’t know everything about everything, but you know a lot about a lot. And I can go, hey, what do you think about this? How do we extend that? How does that scale? How does trust scale? And that’s some of the stuff where, this stuff is so smart, maybe it can help us figure some of that out.

Cody:
Amen. Bryce, last words.

Bryce:
I don’t know. I saw a video, or it was a partial documentary, of a company that’s using drones, building what they call hordes of drones that can be programmed with AI. You upload an image of a known terrorist and it searches buildings with 100 drones to find that person, potentially in a city, hiding amongst civilians. So there are real capabilities of AI.

John:
But the problem in that scenario is that the bad actor can also be like, hey, I want to go get those good, honest people with those drones.

Bryce:
Right. I mean there are some dangers out there like that I see.

Cody:
Yeah, that’s a good one. That’s a good one. Or not a good one, but that’s a good example. That’s a good example of a terrible thing. John-

John:
There have to be rules, and there’s going to continue to be the attempt to… How can governments keep up with technology that is just so beyond them?

Cody:
I think this was our best episode. I’m going to actually submit this and get a transcript and share it with my Trading with Cody subscribers. John, I always appreciate your time and energy. Bryce, thank you. John, thank you. We’ll see you guys, same bat channel, same bat time next week, 5:00 PM Eastern.

John:
Yeah, I’ll check with you. I’m traveling next week, but I think I don’t leave till Tuesday.

Cody:
Oh, you know what? We are too. We’re going to be on our way to Dallas at this time next week. So we will be here two weeks from today. We will not be doing it next week at the same bat time, same bat channel.

John:
Two weeks from today I will be on vacation, but we’ll figure it out. We’ll let everybody know. How’s that?

Cody:
We’ll do it without you. We’ll pontificate.

John:
I’ll do it from a cruise ship. Sarah’s [inaudible 00:38:24].

Cody:
God bless. Hey man, thank you. Be safe on your travels. We’ll talk soon.

John:
Thanks. See you guys later.

Bryce:
Bye, John.