What if cities no longer just saw, but finally understood what they were looking at?
That's the promise of computer vision applied to smart cities: to move from a monitored space to a legible territory.
Not to "control" the city, but to help people live together better.
Because public space is under pressure. Cities are becoming crossroads for multiple interests: residents, tourists, workers, logistics flows, soft mobility, events, construction sites, shared mobility... and each has its own, often conflicting, logic of use.
The result: constant tensions that are difficult to anticipate with conventional observation methods. The limiting factor is no longer human resources, but the ability to understand what is going on, in real time and in detail.
This is where CORE changes everything.
CORE doesn't capture an image. It captures usage.
One of CORE's major contributions is its ability to give context to urban space. It's not a video surveillance system. It's an interpreter. It doesn't see a car; it understands whether that car is blocking fire access. It doesn't see a pedestrian; it understands whether they're waiting, crossing, or part of a queue that's forming.
This semantic difference changes everything.
Cameras are already there. But without visual intelligence, they're only useful after the fact, as part of investigations or incident reviews. CORE, on the other hand, enables proactive reading: it identifies weak signals, triggers intelligent alerts and measures the effects of municipal action.
For example, after a street was pedestrianized, was there really a drop in motorized traffic? Did traffic simply shift elsewhere? What new tensions has this generated?
With CORE, the city can rely on dynamic behavioral data, rather than one-off surveys.
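To make the "interpreter, not camera" idea above concrete, here is a minimal sketch of a rule layer that turns raw detections into semantic usage events. Everything in it is hypothetical: the class names, zones, and thresholds are illustrative assumptions, not CORE's actual API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    obj_class: str            # e.g. "car", "pedestrian" (illustrative labels)
    zone: str                 # zone the object occupies, e.g. "fire_access"
    seconds_stationary: float # how long the object has not moved

def interpret(d: Detection) -> Optional[str]:
    """Map a raw detection to a semantic event, or None if nothing notable."""
    # A car is only an event if it blocks a critical zone for too long.
    if d.obj_class == "car" and d.zone == "fire_access" and d.seconds_stationary > 60:
        return "vehicle blocking fire access"
    # A stationary pedestrian near a counter may signal a queue forming.
    if d.obj_class == "pedestrian" and d.zone == "service_counter" and d.seconds_stationary > 120:
        return "queue likely forming"
    return None

print(interpret(Detection("car", "fire_access", 300)))
# → vehicle blocking fire access
```

The point of the sketch is the separation of concerns: detection produces objects, while a semantic layer decides whether an object in a given context constitutes an event worth acting on.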
Serving residents: limiting nuisance and invisible friction
Many incivilities in the city are not spectacular. They are small, everyday offences that residents experience as a silent degradation of their living environment:
- Repeated parking on sidewalks.
- Delivery trucks obstructing bike lanes.
- Queues at fast-food outlets or government offices spilling over into public space.
- Play areas occupied by scooters.
- Illegitimate occupation of space at night.
These micro-phenomena are difficult to observe continuously, as they often only exist for a few minutes, several times a day. And yet, they take their toll on the daily lives of local residents.
CORE makes it possible to objectify these realities: to issue the right alert at the right time, and to produce precise statistics, zone by zone, without saturating supervision teams.
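Those zone-by-zone statistics can be pictured as a simple aggregation over semantic events. The zone names and event types below are invented for illustration; the sketch only shows how counts replace the need to watch raw video.

```python
from collections import Counter

# Hypothetical event log, as a semantic layer might emit it.
events = [
    {"zone": "rue_A", "type": "sidewalk_parking"},
    {"zone": "rue_A", "type": "sidewalk_parking"},
    {"zone": "rue_B", "type": "bike_lane_obstruction"},
]

# Count occurrences per (zone, event type) pair.
per_zone = Counter((e["zone"], e["type"]) for e in events)

for (zone, event_type), count in sorted(per_zone.items()):
    print(f"{zone}: {event_type} x{count}")
# → rue_A: sidewalk_parking x2
# → rue_B: bike_lane_obstruction x1
```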
What's next? Use cases still under-exploited
💡 Analyzing the uses of street furniture: benches, bicycle racks, bus shelters... Are they used? By whom? At what times? The answers can redirect investment, adapting furniture to actual (not presumed) uses.
💡 Detecting the city's "dead zones": places that see little foot traffic, feel unsafe, or are poorly lit. By cross-referencing pedestrian flows with the time of day, CORE can help reactivate these spaces, maintain them better, or redesign their lighting.
💡 Reacting to ephemeral nuisances: street racing ("urban rodeos"), night-time noise, late-evening crowd overflows. CORE can serve as a behavioral radar, coupled with remotely triggered sound or light alerts.
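The "dead zone" idea above comes down to cross-referencing hourly pedestrian counts against a footfall threshold. The counts, zone names, and threshold here are invented assumptions, purely to illustrate the mechanic:

```python
# Hypothetical hourly pedestrian counts per zone (index = hour offset).
hourly_counts = {
    "passage_X": [2, 1, 0, 3, 40, 55, 60, 12, 4, 1],
    "place_Y":   [30, 45, 50, 60, 80, 90, 85, 70, 40, 20],
}

def dead_hours(counts, threshold=5):
    """Return the hour indices where footfall drops below the threshold."""
    return [hour for hour, n in enumerate(counts) if n < threshold]

for zone, counts in hourly_counts.items():
    print(zone, dead_hours(counts))
# → passage_X [0, 1, 2, 3, 8, 9]
# → place_Y []
```

Hours flagged this way could then be matched against lighting schedules or maintenance rounds, which is the cross-referencing the paragraph describes.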
Intelligence for the right balance
The intelligent city will not be one that monitors everything. It will be one that understands well enough to act with nuance.
CORE does not replace human decision-making: it enhances it. It gives every community the means to base its actions on facts, not feelings. And to do so without compromising ethics: no facial recognition, no commercial use of images, no unnecessary storage.
Simply put: an enhanced reading of public space, for more targeted interventions, fairer planning decisions, and a more liveable city for all.
Smart city, yes. But above all, an understanding city.