Measuring Remote Work Using a Large Language Model (LLM)


Peter John Lambert

The Covid-19 pandemic propelled an enormous uptake in hybrid and fully remote work. Over time, it has become clear that this shift will endure long after the initial forcing event. There are few modern precedents for such an abrupt, large-scale shift in working arrangements. This article analyzes the full text of hundreds of millions of job postings in five English-speaking countries. In doing so, it applies a state-of-the-art Large Language Model (LLM) to analyze the text and determine whether the job allows remote/hybrid work.

Key Messages

  • Large Language Models (LLMs) can dramatically improve upon traditional text-based measurement tools used by economists
  • We train and test the “Work-from-Home Algorithmic Measure” (WHAM) model to detect new online job postings offering remote/hybrid arrangements. The WHAM model has near-human accuracy. We deploy this model at scale, processing hundreds of millions of job ads collected across five countries and thousands of cities
  • The share of new ads offering remote/hybrid jobs increased four-fold in the US and more than five-fold in the UK, Australia, Canada, and New Zealand, between 2019 and 2023. These data and more are available for researchers at
  • The “remote work gap” across cities, occupations, and high- and low-salary workers continues to widen, and the share of advertised remote/hybrid work is highly skewed towards white-collar workers and cities that are hubs for government, business, technology, and higher education
  • LLMs offer massive potential for empirical research using text data, but one should adhere to best practices and understand the “do’s and don’ts” of these technologies. Generative AI offers immense promise, with some significant limitations
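To illustrate the first key message, the sketch below shows a traditional dictionary-based classifier of the kind economists have long applied to job-ad text. The keyword list here is purely hypothetical, not the actual lexicon used in the literature; the point is that such a tool cannot handle negation or context, which is precisely where an LLM-based classifier like WHAM improves on it.

```python
import re

# Hypothetical keyword list for illustration only; not the authors' actual lexicon.
REMOTE_TERMS = [r"\bremote\b", r"work from home", r"\bhybrid\b", r"\btelecommut\w*"]

def dictionary_classifier(ad_text: str) -> bool:
    """Flag a job ad as remote/hybrid if any keyword appears anywhere in the text."""
    text = ad_text.lower()
    return any(re.search(pattern, text) for pattern in REMOTE_TERMS)

# Correctly flags a genuine remote/hybrid offer...
print(dictionary_classifier("Fully remote role; work from home anywhere in the UK."))  # True
# ...but also false-positives on an explicit negation that a context-aware LLM would catch.
print(dictionary_classifier("This role is on-site only; remote work is not possible."))  # True (wrong)
```

The second call shows the failure mode: the keyword match fires even though the ad explicitly rules out remote work, a distinction that requires reading the sentence in context rather than spotting terms.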

Peter John Lambert: “Measuring Remote Work Using a Large Language Model (LLM),” EconPol Forum 24 (3), CESifo, Munich, 2023.