Measuring Human Perceptions of a Large-Scale Urban Region Using Machine Learning

Abstract

Measuring the human sense of place, and quantifying how visual features of the built environment shape that sense, have long been of interest to a wide variety of fields. Previous studies have relied on low-throughput surveys and limited data sources, which make it difficult to measure human perceptions of a large-scale urban region at flexible spatial resolutions. In this work, a data-driven machine learning approach is proposed to measure how people perceive a place in a large-scale urban region. Specifically, a deep learning model, trained on millions of human ratings of street-level imagery, was used to predict human perceptions of a street view image. The model achieved high accuracy in predicting six human perceptual indicators, namely, safe, lively, beautiful, wealthy, depressing, and boring. This model can help map the city-wide distribution of human perceptions for a new urban region. Furthermore, a series of statistical analyses was conducted to identify the visual elements that may cause a place to be perceived differently. Among the 150 object categories segmented from the street view images, various objects were identified as being positively or negatively correlated with each of the six perceptual indicators. The results take researchers and urban planners one step closer to understanding the interactions between place sentiments and semantics.
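The correlation analysis described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the data here are synthetic, the category names and effect sizes are hypothetical stand-ins for the 150 segmented object categories, and the "perceptual score" simulates what the trained model would predict for each image.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: for each of 1000 street view images, the pixel
# proportion of a few segmented object categories (the paper uses 150).
n_images = 1000
categories = ["tree", "sky", "wall", "road"]
proportions = rng.dirichlet(np.ones(len(categories)), size=n_images)

# Toy perceptual scores for the "safe" indicator: positively driven by
# "tree" coverage, negatively by "wall" coverage, plus noise.
safety = (0.6 * proportions[:, 0]
          - 0.4 * proportions[:, 2]
          + rng.normal(0.0, 0.05, n_images))

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D arrays."""
    x = x - x.mean()
    y = y - y.mean()
    return float((x @ y) / np.sqrt((x @ x) * (y @ y)))

# Rank categories by the strength of their correlation with the indicator.
corrs = {c: pearson_r(proportions[:, i], safety)
         for i, c in enumerate(categories)}
for c, r in sorted(corrs.items(), key=lambda kv: -abs(kv[1])):
    print(f"{c:>5}: r = {r:+.3f}")
```

Repeating this for each of the six perceptual indicators yields, per indicator, a ranked list of object categories with positive or negative associations, which is the kind of result the statistical analyses in the paper report.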

Publication
Landscape and Urban Planning
Fan Zhang
Assistant professor
Yu Liu
Professor