A Conversation With Russell McIntyre
From Capitol Hill to the mortgage industry, AI is everywhere. It’s even finding its way into the halls of government. But while AI promises innovation, it’s also raising a ton of questions—especially for policymakers.
Last year, Congress introduced over 350 AI-related bills, and while many of these bills simply served to spark debate, it’s clear that lawmakers are considering everything from data privacy to mitigating bias in algorithms. Now, with a new Congress in session, the real question is: will AI regulation finally take center stage?
Although no property-focused AI bills made it to the floor last session, there’s been plenty of groundwork. From Senate hearings on AI in financial services to conversations about the technology’s impact on housing, policymakers have been laying the foundation for regulation. One theme keeps emerging: the need to balance innovation with fairness.
Technology often moves faster than regulation, but if regulators move too fast or ignore industry concerns, they could stall progress. And then there’s the wildcard: the Supreme Court’s 2024 overturning of the Chevron doctrine, which shifted interpretive power from agencies to the courts. This means Congress now has to be ultra-specific when drafting laws, which could further slow regulatory initiatives.
The future of AI regulation may be uncertain, but one thing’s clear: the stakes couldn’t be higher. In this episode of Core Conversations, host Maiclaire Bolton Smith and Russell McIntyre, an expert in public policy and industry relations at CoreLogic, discuss the need for AI regulation and how the government could approach this task.
In This Episode:
1:50 – Is the U.S. government currently regulating AI? What about AI in the property industry?
5:19 – Whose responsibility is it to regulate AI?
7:19 – How is the Chevron doctrine going to influence AI regulation in Congress?
10:00 – What are the concerns and opportunities if AI regulations change in the property industry?
12:28 – How will AI affect climate science?
14:27 – Erika Stanley goes over the numbers in the property market with The Sip.
15:26 – Is it impossible to eliminate inherent bias in AI technology?
17:49 – How will the U.S. government handle AI and how can companies prepare for upcoming regulations around AI?
Russell McIntyre:
As far as future federal regulation, I think that we can make some educated guesses as far as what the Trump administration might want to do in terms of AI regulation, which as I mentioned essentially boils down to deregulation and innovation.
Maiclaire Bolton Smith:
Welcome back to Core Conversations: A CoreLogic Podcast, where we tour the property market to investigate how economics, climate resiliency, governmental policies, and technology affect everyday life. I am your host, Maiclaire Bolton Smith, and I’m just as curious as you are about everything that happens in our industry. AI: it’s just two little letters, but its impact is huge, from household names like ChatGPT to our own core AI approach. Using artificial intelligence in business is a shift that is here to stay. However, with almost any new technology, after the first wave has come and gone, the next step is often regulation. While the U.S. government has many different industries to review, in the property industry, new legislation or retroactively removing current policies has the potential to cause upheaval. So to talk about how the government approaches regulations in new spaces and what regulating AI could look like for the property industry, we’ve welcomed back Russell McIntyre, an expert in public policy and industry relations here at CoreLogic. Russell, welcome to Core Conversations.
RM:
Yeah, thank you so much for having me back here. I’m really excited to talk about AI today.
ES:
Before we get too far into this episode, I wanted to remind our listeners that we want to help you keep pace with the property market. To make it easy, we curate the latest insights and analysis for you on our social media, where you can find us using the handle @CoreLogic on Facebook and LinkedIn or @CoreLogicInc on X and Instagram. But now, let’s get back to Maiclaire and Russell.
MBS:
Yeah, this is going to be great. So, okay, AI is everywhere now. You can’t go anywhere without hearing something about AI in some way, shape, or form, and it often feels like it’s all we’re hearing about. So let’s just start by specifically talking about the government. What are they doing, or what are they going to do, from an AI policy perspective?
RM:
Yeah, so given that we just had this major presidential election, we’re in this kind of small window where all of Capitol Hill’s attention is looking forward to the next Congress. That said, there were a lot of AI-related bills introduced in this last Congress, about 350 of them.
MBS:
Oh, wow.
RM:
Yes, some of which could definitely come back once the new Congress has been sworn in. And there are so many; they cover a range of industries where AI is starting to make an impact, from healthcare to military activities to small businesses. It’s all over the board. And while most of these bills kind of died on the vine in this Congress, there are a number that received a lot of discussion behind the scenes and even in some hearings, and they could be a focal point of the next Congress if they decide they really want to tackle this issue.
MBS:
Okay, interesting. I guess when we look at the bills that have been on the floor related to the property industry, how have they fared? We are expecting some may come back given the current state, but how have the ones that have been there fared so far? And with the change in government, do we expect there to be any kind of different outcome now?
RM:
Yeah, so there was a lot of discussion in this space in the previous Congress, and while no property-focused AI bills actually made it to the floor of the House or Senate, there was a lot of conversation that took place, mainly in the Senate Banking Committee, which held several hearings over the past two years on AI. This included one in September 2023 that was focused on AI and financial services, and one in January 2024 focused specifically on AI in the housing industry. And then there have been a number of times where federal regulators have come in for oversight hearings and been questioned specifically on how they’re thinking about AI. Most of the legislation introduced thus far is kind of high-level thinking about threats that AI could pose to the financial services industry. A couple of examples of that would be the Financial Artificial Intelligence Risk Reduction Act and the Artificial Intelligence Advancement Act, both of which would require our federal regulators to create a report on potential threats and make policy recommendations to Congress. However, there have been some more ambitious bills introduced that outline more of a general framework for AI-related innovation. An example would be the Artificial Intelligence Research, Innovation, and Accountability Act. And all of the specific bills I’ve mentioned were either introduced by a Republican or have a Republican as an original co-sponsor, and given the makeup of the Congress that we’re now in, this is key because Republicans are now driving the policy agenda. So even though the current Congress didn’t pass any laws on property-specific AI issues, there was still a lot of fact-gathering and groundwork at the Banking Committee that puts them in a better position to act moving forward.
MBS:
It brings to mind that AI is far-reaching in everything that we do, in everything we touch. So whose responsibility is it actually to regulate AI? Is that actually the government’s responsibility, or where does that fall?
RM:
Yeah, I think it really is so wide-ranging that it’s going to take this whole-of-government approach to effectively regulate it. Every department, every agency is going to have to be involved. I don’t think there’s going to be one central office, like an AI office, where they have all of these oversight responsibilities, especially, like you said, given the unique ways in which AI can be used across industries, because mortgage underwriters are going to be using AI differently than healthcare experts or air traffic controllers are. So while there may be some room for some administration-wide priorities, I think most of these regulations are going to come from an individual agency level. And I kind of liken this to some of the things that the Biden administration did on climate resiliency efforts. They issued an executive order that had some general guidelines, but then they really turned it over to the agencies to tailor their regulations more specifically to their industry. So I think we could see a similar thing play out with the Trump administration.
MBS:
Sure, that would make sense. I guess though, once you have individual agencies overseeing things for particular industries, do we anticipate there may be some conflicts between what may be coming from one side versus another side?
RM:
For sure. It absolutely could. Given all of these complexities associated with how we use AI and all the energy that goes into letting these AI data centers kind of run, I would expect a lot of agencies to take this a little bit slower, make sure that they’re not introducing policies that conflict with one another, and I think they’re going to want to have a lot of industry input on this one because they’re not going to want to just put out some blanket regulations and immediately get a ton of pushback from the industry. So I do think you’ll see a lot of back and forth between trade associations and individual companies over the next couple of years.
MBS:
Yeah, I guess hearing you say that triggers a thought of something that not everybody may be familiar with: the overturning of the Chevron doctrine last summer. It really did end up taking power away from federal agencies, and I guess it’s now moved to the courts to do their own ruling. Do we think this example in particular may influence how AI regulations come to be?
ES:
For a little context, the Chevron doctrine was a legal precedent that required courts to defer to an agency’s interpretation of a statute as long as it was reasonable. The doctrine was overturned by the Supreme Court in 2024, which means that courts will now rely on their own interpretations of ambiguous laws. Learn more about laws in limbo that could affect the property market in the Core Conversations episode that aired on April 17, 2024. The link is in the show notes.
RM:
Yeah, I do think so, and I’m glad you brought this back up. I believe we touched on this the first time I came on the podcast, so we’ll have to make this a tradition. The overturning of the Chevron doctrine is essentially going to force Congress to be much more specific in their legislation, because it limits the leeway that administrative departments have, giving them a little bit less wiggle room when they’re operationalizing any new laws. So the onus is back on Congress to be more specific. And unfortunately, as you might have noticed, there aren’t a lot of artificial intelligence experts in Congress, even when it comes to staff members. I know that they’re trying to get better at adding that expertise, but I do think that the overturning of the Chevron doctrine makes this regulatory process a little bit more difficult. Not impossible by any means, but definitely a little more difficult for agencies.
MBS:
I guess it leads me to think: AI solves a lot of problems for us. Do we foresee AI actually making these AI regulations automatic? Do we think that AI is going to help generate AI regulation?
RM:
That’s a really fun question: imagining some Capitol Hill staffers using ChatGPT to help write AI regulations and legislation. I think we’re still a ways off from that, but I definitely think there are ways in which AI could be used in regulatory oversight. It could help regulators kind of dredge through large amounts of data to identify potential issues. But we’ve got to establish a system of checks and balances on the usage of AI, I think, before we get to the point where we’re trusting it to help regulate itself.
MBS:
We’ve talked about AI a lot on this podcast, and I think back to specific podcasts that we’ve done about AI specifically related to the property industry. And I guess from your perspective, Russell, are there any concerns or opportunities specifically in our industry on how AI might drive things?
RM:
Yeah, I think there’s lots of both. I can focus on the opportunities first, which actually were the topic of a really cool AI-focused panel at the Mortgage Bankers Association annual convention back in October. Definitely recommend checking that out. At its root, the benefits of AI really revolve around the ability to ingest and analyze large amounts of data, which can be a game changer for mortgage originators and underwriters who, on a daily basis, have to process a ton of loan-related documentation. AI can help them identify where errors might have occurred or areas where data might be missing or incomplete. And that’s where the real opportunities lie: making those existing operations much more efficient, bringing down your timelines, reducing your internal costs. And when it comes to regulating these opportunities, I think it’s pretty reasonable to assume that the Trump administration is going to be much more hands-off than a Harris administration would’ve been. I think there will be much less of a focus on uncovering and regulating against potential biases that AI systems may develop and more of an open-sandbox kind of atmosphere where AI systems can operate without much fear of regulatory oversight based on demographic issues or things like that. But that also leads me right into the concerns you mentioned in your question. I think there’s definitely concern within the industry as to how AI has the potential to exacerbate biases that already exist. And this is basically the same conversation we’ve been having around black-box algorithms for years now: how to make sure that the data feeding into that AI doesn’t include any underlying biases, whether those be around race, ethnicity, gender, et cetera, and how to make sure your AI model doesn’t exacerbate any of those biases if they’re present.
ES:
Learn more about how AI models can and should be constructed to prevent biases in the episode of Core Conversations that aired on May 29, 2024. The link is in the show notes.
MBS:
Sure. No, understandable. I guess the other thing that comes to mind as a hot topic is climate science, and I can definitely see there being a need for more policy around how AI is used, specifically related to climate science. What kind of implications might that have for something that’s really a hot topic, especially for this administration?
RM:
So I’m going to be a little more optimistic here. I think any initial AI policy changes or regulations are going to have more limited implications for the climate science community. That’s not to say there won’t be any down the line; I don’t want to get out over my skis here. But for now, I do think that some of the initial regulations on AI are going to prioritize other issues, and I can give you some examples. First would be the demographic bias issues that we just talked about. Second, I think there’s going to be a lot of discussion around data privacy issues: the usage of personally identifiable information and making sure consumers can keep their personal data safe. Data breaches have become so common that pretty much every American is aware of the data privacy issue by now, and I think AI can exacerbate some of those concerns. And then third, I think copyright and licensing issues will actually be the subject of a lot of initial regulations. This would be related to both the data inputs that are feeding the AI model and the outputs of the model itself. I think we’ve all seen numerous copyright lawsuits over the years where you have these large companies with very popular intellectual property, think of a Disney or a Nintendo, and they’re not afraid to go to court if they feel like their IP is being used in a way that they haven’t condoned. And I think AI might exacerbate those issues.
ES:
It’s that time again. Grab a cup of coffee or your favorite beverage; we’re going to do the numbers in the housing market. Here’s what you need to know: Mom-and-pop investors are quietly shaping the housing market. While headlines often focus on institutional investors, they make up less than 2% of all investor purchases. Most real estate investors are actually small-time landlords; buyers who own between three and 10 properties make 16% to 25% of all investor purchases. But regardless of their size, investment is slowing across the board. Investors made 21,000 fewer purchases in Q3 of this year than in Q3 of last year, and the declines are even steeper if you look back further to 2021 and 2022. Still, investors are buying. Dallas, Houston, Atlanta, Los Angeles, and Phoenix are the top five cities for investors, and within each of these cities, it’s the small investors that are buying the bulk of the properties. And that’s The Sip. See you next time.
MBS:
Jumping back to when you were talking about the mortgage industry, Russell, and in particular the worry of inherent bias. This has been long talked about in technology, especially when it comes to mortgages. The National Fair Housing Alliance has even done a recent study suggesting there’s a possibility to use AI to make mortgage underwriting more equitable and potentially eliminate some of the inherent bias that may be there in the industry. So can you talk a little bit about this? What exactly would that look like? Are there going to be specific algorithms to work on pricing, and how do you foresee this coming to be? I think it’s a really interesting topic.
RM:
It really is. We have all these concerns about AI potentially exacerbating biases, but if you flip the coin, you can also find ways in which AI might be able to solve some of these issues.
I’m really glad you brought up that study because, shameless plug, our team here at CoreLogic was able to provide the National Fair Housing Alliance with the data they needed to validate the underwriting and pricing model they developed in that study, which was such a cool study to read. But yeah, I think there are ways we can flip this on its head: if we can train AI to not only identify instances of underlying bias but also find innovative ways to overcome those biases, we can actually create a much more equitable housing finance system. That’s what the National Fair Housing Alliance was doing with that study, and I think it gives us a lot of optimism about ways we can overcome these issues.
MBS:
Yeah, I mean, it’s very optimistic and it sounds like it’s something that would be so advantageous, but I have to go back to, do we foresee regulations coming in that could either help or hinder that in kind of moving into that direction in the future?
RM:
Yeah, I think that the regulations initially are going to be light, but I do think that by providing more of a sandbox for people to play in this AI space, which is what I think this administration’s wanting to do, I think that they are going to have a focus on innovation, on finding new ways to use AI to solve current problems. So I am still optimistic on that as well.
MBS:
Yeah. Okay. Okay, Russell, get out your crystal ball. We’re going to look into the future, and not just in terms of what we think is going to happen, but more in terms of what companies can do to prepare for any regulations that may come up in and around AI. What can people be thinking of to start to be prepared?
RM:
And I think some of this is kind of basic level stuff. Companies need to make sure they’re educating themselves on artificial intelligence, and that goes from the executive level on down. It’s not just your C-suite that needs to understand these issues, it’s everyone at the company.
I think companies should also start establishing their own internal best practices and setting up some guardrails internally to create a framework for how they develop and use AI, and know that that framework will continue to evolve over time as rules and regulations get developed, as our understanding of AI grows, and as our capabilities get better. And then lastly, we mentioned that some of these AI regulations are probably going to be industry-specific, and that the Trump administration is probably going to be reaching out to the industry to get their thoughts on these issues before regulating on the topic. So I’d suggest that companies start engaging with their industry peers and their trade associations on this issue, because there could be some unique issues within the usage of AI that might affect some industries more than others. And just having that information sharing is going to really help us come up with more effective and better-targeted best practices and standards, which will need to happen.
MBS:
And potentially help drive the regulation that needs to be created.
RM:
Exactly. Yeah. The Trump administration is going to be relying on industry for input on a lot of these regulations, not just AI specific, but AI included. So I do think that companies that are able to develop a stance on AI and to show they understand it will be better suited to join those conversations.
MBS:
I guess when we look to the future, if you pull out that crystal ball, I think back. I mean, honestly, if I look back to even the eighties and where we thought we would be by 2025, I think we thought we’d all be wandering around on hoverboards, but AI was always something that seemed very fictitious and very far in the future. And then really in the last decade, there’s been this proliferation of AI; it’s everywhere, and it’s just exploded. Where do you think the government in particular is going to go with AI? Are we going to get to a stage where we’re not going to have a human being be president and we’re just going to have an AI president running the country? And what regulation do we think, where do we think regulations will go as you look toward the future?
RM:
Oh, yeah. I don’t know if we’ll ever have that, although it does sound like a really great Michael Bay movie that I’d love to watch, or a John Grisham book or something like that: an AI president. But as far as future federal regulation, I think we can make some educated guesses as to what the Trump administration might want to do in terms of AI regulation, which, as I mentioned, essentially boils down to deregulation and innovation. Over the summer, the GOP updated their policy platform to include a repeal of the Biden administration’s executive order on AI from October of 2023, I believe, which essentially falls under the deregulation category. That executive order didn’t have much teeth to it, but it did provide some outlines and general guidance for how agencies could be thinking about the usage of AI. But while the Trump administration is pursuing deregulation and innovation, we could also see them increasing their investments in the usage of AI across the government, notably in our military and defense operations. I think that’s an area where the Trump administration will spend a lot of their time in general, focusing on ways to innovate our military, and AI could be something they use there. But honestly, that’s about as much as I’m willing to speculate. We saw during the first Trump administration that these priorities can sometimes change kind of quickly, so I think it’s best to take a lot of these forecasts with a grain of salt. But I’m definitely optimistic about the way our federal government will be thinking about AI in the future.
MBS:
Yeah, I agree with you. I think it’s inevitable that AI is going to be a part of everything that comes down the line and that change is upon us. So Russell, thank you so much for joining us again and being back on Core Conversations, and you will definitely be back again. Thanks for joining me today on Core Conversations: A CoreLogic Podcast.
RM:
Thanks Maiclaire.
MBS:
Alright, and thank you for listening. I hope you’ve enjoyed our latest episode. Please remember to leave us a review and let us know your thoughts, and subscribe wherever you get your podcasts to be notified when new episodes are released. And thanks to the team for helping bring this podcast to life: producer Jessi Devenyns; editor and sound engineer Romie Aromin; our facts guru, Erika Stanley; and social media duo Sarah Buck and Makaila Brooks. Tune in next time for another Core Conversation.
ES:
You still there? Well, thanks for sticking around. Are you curious to know a little bit more about our guest today? Well, Russell McIntyre is an expert in public policy and industry relations for CoreLogic. He is responsible for researching government and industry issues of importance to the organization and its clients. He also coordinates the CoreLogic PAC activities and assists leadership with appointments, events and projects with partner organizations.
©2025 CoreLogic, Inc. All rights reserved. The CoreLogic content and information in this blog post may not be reproduced or used in any form without express accreditation to CoreLogic as the source of the content. While all of the content and information in this blog post is believed to be accurate, the content and information is provided “as is” with no guarantee, representation, or warranty, express or implied, of any kind including but not limited to as to the merchantability, non-infringement of intellectual property rights, completeness, accuracy, applicability, or fitness, in connection with the content or information or the products referenced and assumes no responsibility or liability whatsoever for the content or information or the products referenced or any reliance thereon. CoreLogic® and the CoreLogic logo are the trademarks of CoreLogic, Inc. or its affiliates or subsidiaries. Other trade names or trademarks referenced are the property of their respective owners.
Written by: Maiclaire Bolton Smith