Training Computers to See the Built Environment Related to Physical Activity: Detection of Microscale Walkability Features Using Computer Vision
Marc A. Adams,
Christine B. Phillips,
Akshar Patel and
Ariane Middel
Additional contact information
Marc A. Adams: College of Health Solutions, Arizona State University, Phoenix, AZ 85004, USA
Christine B. Phillips: Department of Psychology, Clemson University, Clemson, SC 29634, USA
Akshar Patel: College of Health Solutions, Arizona State University, Phoenix, AZ 85004, USA
Ariane Middel: Herberger Institute for Design and the Arts, School of Arts, Media and Engineering, Arizona State University, Phoenix, AZ 85004, USA
IJERPH, 2022, vol. 19, issue 8, 1-16
Abstract:
The study purpose was to train and validate a deep learning approach to detect microscale streetscape features related to pedestrian physical activity. This work innovates by combining computer vision techniques with Google Street View (GSV) images to overcome impediments to conducting audits (e.g., time, safety, and expert labor cost). The EfficientNet-B5 architecture was used to build deep learning models for eight microscale features guided by the Microscale Audit of Pedestrian Streetscapes Mini tool: sidewalks, sidewalk buffers, curb cuts, zebra and line crosswalks, walk signals, bike symbols, and streetlights. We used a train–correct loop, whereby models were trained on a training dataset, evaluated using a separate validation dataset, and trained further until acceptable performance metrics were achieved. Further, we used trained models to audit participant (N = 512) neighborhoods in the WalkIT Arizona trial. Correlations were explored between microscale features and GIS-measured and participant-reported neighborhood macroscale walkability. Classifier precision, recall, and overall accuracy were all above 84%. The total microscale score was associated with overall macroscale walkability (r = 0.30, p < 0.001). Positive associations were found between model-detected and self-reported sidewalks (r = 0.41, p < 0.001) and sidewalk buffers (r = 0.26, p < 0.001). The computer vision model results suggest an alternative to trained human raters, allowing for audits of hundreds or thousands of neighborhoods for population surveillance or hypothesis testing.
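The train–correct loop described in the abstract can be sketched in pure Python as below. This is a minimal illustration, not the authors' implementation: `train_step` and `evaluate` are hypothetical stand-ins for fitting an EfficientNet-B5 classifier and scoring it on the validation split, and the 0.84 threshold simply mirrors the reported ">84%" precision, recall, and accuracy criterion.

```python
def train_correct_loop(train_step, evaluate, target=0.84, max_rounds=10):
    """Repeat training rounds until precision, recall, and accuracy on a
    held-out validation set all exceed `target` (or rounds run out)."""
    metrics = {}
    for round_num in range(1, max_rounds + 1):
        train_step()          # fit on the (corrected) training images
        metrics = evaluate()  # score against the validation dataset
        if all(value > target for value in metrics.values()):
            return round_num, metrics  # acceptable performance reached
    return max_rounds, metrics

# Hypothetical demo: a stub "model" whose validation metrics improve
# by 0.05 per round, starting from 0.70.
state = {"score": 0.70}

def train_step():
    state["score"] = min(0.99, state["score"] + 0.05)

def evaluate():
    s = state["score"]
    return {"precision": s, "recall": s, "accuracy": s}

rounds, metrics = train_correct_loop(train_step, evaluate)
```

With these stub values the loop stops after the third round, the first at which every metric clears the 0.84 bar.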
Keywords: built environment; computer vision; deep learning; Google Street View; microscale; walkability
JEL-codes: I I1 I3 Q Q5
Date: 2022
Downloads:
https://www.mdpi.com/1660-4601/19/8/4548/pdf (application/pdf)
https://www.mdpi.com/1660-4601/19/8/4548/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jijerp:v:19:y:2022:i:8:p:4548-:d:790389
IJERPH is currently edited by Ms. Jenna Liu