Jul 01, 2023

Turning Old Maps into 3D Digital Models of Lost Neighborhoods

Andrew Corselli

Imagine strapping on a VR headset and strolling through the long-gone neighborhood in which you grew up. That is now a very real possibility, as researchers have developed a method to create 3D digital models of historic neighborhoods using machine learning and historic Sanborn Fire Insurance maps.

“The story here is we now have the ability to unlock the wealth of data that is embedded in these Sanborn fire atlases that were created for about 12,000 cities and towns in the United States,” said Harvey Miller, study co-author and geography professor at Ohio State University. “It enables a whole new approach to urban historical research that we could never have imagined before machine learning. It is a game changer.”

Study co-author Yue Lin, geography doctoral student at OSU, developed machine learning tools that can extract details about individual buildings from the maps, including their locations and footprints, number of floors, construction materials, and primary use.

The researchers tested their machine learning technique on two adjacent neighborhoods on the near east side of Columbus, OH, that were largely destroyed in the 1960s to make way for the construction of I-70. Machine learning techniques were able to extract the data from the maps and create digital models.

Comparing data from the Sanborn maps to today showed that a total of 380 buildings were demolished in the two neighborhoods for the highway — including 286 houses, 86 garages, five apartments, and three stores. Analysis of the results showed that the machine learning model was about 90 percent accurate in recreating the information contained in the maps.
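
To make that comparison concrete, here is a minimal sketch of how footprints extracted from the Sanborn maps might be checked against a present-day building layer; the file names and column names are hypothetical, and this is an illustration rather than the study's actual pipeline.

```python
import geopandas as gpd

# Hypothetical inputs: footprints extracted from the Sanborn maps and a
# present-day building layer, both georeferenced to the same CRS.
historic = gpd.read_file("sanborn_footprints_1960.geojson")
current = gpd.read_file("columbus_buildings_today.geojson")

# A historic footprint with no intersecting modern building is a candidate
# demolition; how="left" keeps unmatched rows with a NaN join index.
joined = gpd.sjoin(historic, current, how="left", predicate="intersects")
demolished = joined[joined["index_right"].isna()]

# Tally losses by the primary-use label read off the map.
print(demolished.groupby("primary_use").size())
```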

“We want to get to the point in this project where we can give people virtual reality headsets and let them walk down the street as it was in 1960 or 1940 or perhaps even 1881,” Miller said.

Here is an exclusive Tech Briefs interview — edited for clarity and length — with Miller and Lin.

Tech Briefs: Can you explain in simple terms how the technology works?

Miller: What we do is apply algorithms to data. In this case, we use something called support vector machines for part of it, and also Mask R-CNN. The way it generally works is that we hand-label the correct answers on the maps and then feed them to the machine learning algorithm, which learns by trial and error, by positive and negative feedback. Once it has learned how to detect the information, we can apply it to the rest of the maps.

Lin: The Sanborn maps carry multiple types of information. The first type is the building outline. If you look at the maps, each building has its own outline and shape, and it also has its own color. The colors represent the materials of the buildings. We train a support vector machine model to classify each pixel based on color, so we can distinguish between the background and the buildings, because they have distinct colors. That's how we detect the building outlines and shapes to create a visualization.
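
As a rough illustration of the pixel-color classification Lin describes, here is a minimal sketch that trains a support vector machine on hand-labeled RGB samples; the color values, labels, and training set are invented for the example and are not the study's actual data or code.

```python
import numpy as np
from sklearn.svm import SVC

# Hand-labeled RGB samples (invented values). On Sanborn sheets, color
# encodes material: pink ~ brick, yellow ~ wood frame, off-white ~ background.
X_train = np.array([
    [233, 150, 144], [228, 141, 139],   # brick (pink)
    [248, 234, 150], [251, 240, 160],   # wood frame (yellow)
    [245, 242, 230], [250, 248, 238],   # map background
])
y_train = ["brick", "brick", "frame", "frame", "background", "background"]

clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

# Classify every pixel of a scanned sheet (random stand-in array, H x W x 3).
page = np.random.randint(0, 256, size=(64, 64, 3))
labels = clf.predict(page.reshape(-1, 3)).reshape(64, 64)

# Non-background pixels form a mask from which outlines can be traced.
building_mask = labels != "background"
```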

There are other types of information, like building use; for example, whether it is a store or a residential building. The Sanborn maps also tell us the number of stories of each building, because they're all labeled on the buildings.

For this, we train a detection model called Mask R-CNN. Combining these different pieces of information is how we create the 3D visualization, based on those historical maps and using machine learning.
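
For the detection side, a minimal sketch of running Mask R-CNN using torchvision's pretrained model follows; this is an assumption about tooling, and the COCO-pretrained weights would have to be fine-tuned on hand-labeled map tiles before they could detect anything Sanborn-specific.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# COCO-pretrained weights are only a starting point; the study's model would
# be fine-tuned on hand-labeled Sanborn tiles to detect building annotations.
model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

# A map tile as a float tensor in [0, 1], shape (3, H, W); random stand-in.
tile = torch.rand(3, 512, 512)

with torch.no_grad():
    out = model([tile])[0]   # dict with 'boxes', 'labels', 'scores', 'masks'

# Keep confident detections; each mask is a (1, H, W) soft segmentation.
keep = out["scores"] > 0.5
masks = out["masks"][keep] > 0.5
```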

Tech Briefs: What were the biggest technical challenges you faced throughout your work?

Lin: The georeferencing part. We collected the Sanborn maps from the Library of Congress, and although those maps are digitized, scanned, and colorized, they are not georeferenced. So it took us some time to figure out how to automatically georeference them. That was a really big challenge: they're historical maps, and oftentimes we were not able to find many control points to do the georeferencing.

Miller: The control points are current street intersections that also exist on the Sanborn map. If a street intersection still exists today, we know its location. If we can find the same intersection on the Sanborn map and match the two, that's how we do the georeferencing.

Lin: Oftentimes you'll find streets and intersections on the historical maps that do not exist today. So, for some neighborhoods we're working on, it is very difficult to match the maps to real-world locations. That's one of the biggest challenges we encountered in this process.
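
A minimal version of the control-point approach they describe is a least-squares affine fit from scanned-map pixel coordinates to ground coordinates; the coordinate values below are invented placeholders, not points from the study.

```python
import numpy as np

# Control points: the same street intersections located on the scanned map
# (pixel coordinates) and on the ground (projected map coordinates).
pixels = np.array([[120, 340], [905, 322], [130, 1010], [880, 995]], float)
world = np.array([[331400, 4421800], [331900, 4421815],
                  [331410, 4421350], [331885, 4421365]], float)

# Solve world ~ [px, py, 1] @ A for a 3x2 affine transform in one lstsq call.
ones = np.ones((len(pixels), 1))
A, *_ = np.linalg.lstsq(np.hstack([pixels, ones]), world, rcond=None)

def pixel_to_world(px, py):
    """Map a pixel position on the scan to a ground coordinate."""
    return np.array([px, py, 1.0]) @ A

print(pixel_to_world(500, 660))
```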

We are currently trying to figure out how to make this process more automatic. There are some excellent resources out there. For example, there are researchers developing collaborative platforms that provide georeferenced maps for applications like ours, and there are also machine learning techniques that allow us to do this automatically. Those are some of the future research directions for our project.

Another big challenge is that we're now working on another neighborhood's data, and those Sanborn maps contain lots of industrial buildings. Industrial buildings carry lots of descriptive text, and we're trying to develop methods to extract that text. It is a bit challenging because the text is not standardized and there are lots of abbreviations involved. So developing a text-detection model is challenging, but we're working on that as well.
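
The team's text-extraction method is still in development; as a stand-in, this sketch shows the general shape of the task with off-the-shelf OCR (Tesseract via pytesseract, an assumed tool, with a hypothetical file name and crop box), which also illustrates why unstandardized, abbreviated annotations are hard.

```python
from PIL import Image
import pytesseract

# Crop the region of one industrial building from a scanned sheet
# (file name and crop box are hypothetical).
sheet = Image.open("sanborn_sheet_42.png")
building = sheet.crop((1200, 800, 1600, 1100))

# Generic OCR struggles here: Sanborn annotations are abbreviated, rotated,
# and unstandardized, which is exactly the difficulty Lin describes.
print(pytesseract.image_to_string(building))
```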

Tech Briefs: Harvey, you’re quoted as saying, ‘We want to get to the point in this project where we can give people virtual reality headsets and let them walk down the street as it was in 1960 or 1940 or perhaps even 1881.’ How far off from that do you think we are?

Miller: I would say a couple of years, maybe a year. That's what we're working on right now. We have a new phase of the project where we're trying to reconstruct a commercial street of Black-owned businesses in Columbus from the early 1950s. We want to develop realistic building textures; by textures, I mean what the outside of the buildings looked like. Right now, we have a set of rules based on the architectural style and the neighborhood, and we create buildings that roughly represent what the originals would have looked like. We want to make it much more realistic to the eye.

There are a couple of things we're working on right now to do that. One is a rule-based system: we're studying architectural drawings and coming up with rules for how the facades on these buildings look.

The other technique uses aerial imagery and street view imagery where it exists, either historical street view imagery or Google Street View for buildings that still stand, and projects that imagery onto the building.

And then there’s a hybrid approach where in some buildings we may use imagery like from air photographs or street view imagery and make the roof directly from the image, but then construct the side of the building using rules. It’s going to be an almost building-by-building approach just based on what information we know about the building. If we have good imagery, we can just do it by projecting the image. If not, we have to come up with rules or some kind of simulation of it.

The other thing we're doing to make it realistic is working with our civil engineering program, taking aerial imagery — some of it dates back to the ‘50s, when it was used for mapping purposes — and using that to create realistic 3D landscapes: grass, trees, bushes, and streets. That will also be projected into the scene. We technically know how to do this.

The big challenge we're facing is coming up with a good workflow for the archival research and finding out where this data exists. There are many different sources besides the aerial imagery; even back then, real estate agents used photographs to assess buildings, or if they were selling a building, they would take a photograph and put it into a card file. Some of that has been digitized, and we've been accessing it to look for old photos of the buildings and of the neighborhood. But I'm confident we can do it; it's just a matter of developing the best possible workflow, one that can be generalized to other cities.

Tech Briefs: Is there anything else either of you two would like to add?

Miller: What we’ve done here is a real breakthrough, to capture standard buildings and commercial buildings and residential buildings from these Sanborn atlases. One of our big challenges is going for non-standard buildings, like industrial buildings that have textual descriptions. That’s going to be much more challenging.

The other thing we're working on is extracting the street address data from the maps so we can get the street address for every building and then link that to things like historic phone books, city directories, and other records. That's a little challenging because you have to find the spatial relationship between the street name, the street address, and the buildings. And there are difficult cases; one we looked at this morning was a public housing project with one street address for many, many buildings, and the data is sealed.
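
A hedged sketch of that spatial-linking step, assuming the address points and extracted footprints are available as GeoDataFrames in a shared projected CRS (file and column names here are hypothetical):

```python
import geopandas as gpd

# Hypothetical inputs: address points read off the maps, and the extracted
# building footprints, both in the same projected CRS.
addresses = gpd.read_file("sanborn_addresses.geojson")
footprints = gpd.read_file("sanborn_footprints.geojson")

# Attach each footprint to its nearest address point; one address shared by
# many buildings (e.g., a housing project) simply repeats across rows.
linked = gpd.sjoin_nearest(footprints, addresses, how="left",
                           distance_col="dist_m")

# The address field can now key into digitized phone books and directories.
print(linked[["street_address", "dist_m"]].head())
```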

There are challenges like that, but we have a good team, we're figuring it out, and we'll keep moving forward. A main motivation for doing this is the damage that was done by redlining, urban highway construction, and urban renewal in some neighborhoods in Columbus — and really in many American cities.

We want to give people a sense of what existed before these things happened to these neighborhoods, not only to appreciate what was deliberately done to them, but also to think about ways, moving forward, to reduce or mitigate some of the damage done by these bad practices in the late 20th century.
