GitHub repository
The full source for the project — data preparation, model training, evaluation pipelines, and segment-level aggregation — is hosted on GitHub.
Trained model weights and the manual real-world segment images live as release assets on the same repo — see the v1.0 release.
Direct downloads
Code, datasets, trained weights, and the manual real-world segment images. Source and weights are hosted on the project's GitHub repo and v1.0 release; the two Mapillary datasets must be downloaded from Mapillary directly.
Source ZIP
Snapshot of the repository at the time of submission. Easiest way to grab everything in one go.
Download source (.zip)
Mapillary Vistas
Mapillary Vistas is a street-level imagery dataset used in the parking-meter and curb detection experiments.
Download data
Trained sign detector
YOLOv8m parking-sign detector weights (50-epoch checkpoint, mAP@50 = 0.5487). Useful as a drop-in starting point.
Download weights (.pt)
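The checkpoint loads directly with the ultralytics API. A minimal inference sketch (the weights filename and confidence threshold below are illustrative, not fixed by the release):

from ultralytics import YOLO

# Load the released YOLOv8m parking-sign checkpoint. The filename is a
# placeholder; use the .pt file downloaded from the v1.0 release.
model = YOLO("parking_sign_yolov8m.pt")

# Run inference on a street-level image. conf=0.25 is an arbitrary
# starting threshold, not the value used in our evaluation.
results = model.predict("street_view.jpg", conf=0.25)
for box in results[0].boxes:
    print(box.xyxy.tolist(), float(box.conf))  # pixel box + confidence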
Curb segmentation model
U-Net curb segmentation weights (best-Dice checkpoint, epoch 17, val Dice = 0.5184).
Download weights (.pt)
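A minimal loading sketch, assuming the checkpoint is a state_dict for a segmentation-models-pytorch U-Net; the actual encoder choice and checkpoint format are defined by the training scripts in Mapillary_Vistas_Dataset/scripts/, so treat the names below as placeholders:

import torch
import segmentation_models_pytorch as smp

# Placeholder architecture: the real encoder is set in the training
# scripts. classes=1 assumes a binary curb-vs-background mask, and
# encoder_weights=None skips the ImageNet download since all weights
# come from the checkpoint.
model = smp.Unet(encoder_name="resnet34", encoder_weights=None,
                 in_channels=3, classes=1)
model.load_state_dict(torch.load("curb_unet_best_dice.pt", map_location="cpu"))
model.eval()

with torch.no_grad():
    x = torch.rand(1, 3, 512, 512)   # dummy normalized RGB input
    prob = torch.sigmoid(model(x))   # per-pixel curb probability
    mask = (prob > 0.5).float()      # binary curb mask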
Manual segment images
The 30 manually collected real-world images (six segments × five views) used in the qualitative aggregation experiments.
Download images (.zip)
What's in the repository
High-level layout of QUASARS06/street-parking-presence-inference. Each top-level folder is a self-contained sub-project with its own scripts and outputs.
Top-level layout
street-parking-presence-inference/
├── mtsd_project/ # Parking-sign detection on MTSD
│ ├── scripts/ # Data prep, training, evaluation, viz
│ ├── dataset/ # YOLO-format binary parking-sign dataset (5-image samples)
│ ├── data/ # Processed images/labels (samples)
│ ├── outputs/ # Eval CSVs, threshold sweeps, category samples
│ ├── sample_images/ # Manually collected qualitative images
│ └── parking_signs.yaml # YOLO data config
│
├── Mapillary_Vistas_Dataset/ # Curb segmentation + zero-shot meter eval
│ ├── scripts/ # Curb U-Net training, meter zero-shot eval
│ ├── curb_segmentation/ # Mask metadata + samples
│ ├── training/, validation/, testing/ # 5-image dataset samples
│ ├── parking_meter_imgs/ # Vistas parking-meter samples
│ └── outputs/ # Meter eval CSVs
│
├── manual_segments_dataset/ # 30-image real-world qualitative dataset
│ ├── segments/ # 6 segments × 5 views (raw images)
│ ├── outputs/ # Visualization outputs + scores CSVs
│ └── segments.csv # Index
│
└── parking_aggregation_project/ # Segment-level aggregation system
├── scripts/ # Synthetic-segment builder, aggregation, eval
├── metadata/ # Per-image cue scores
└── outputs/synthetic_segments/ # Synthetic benchmark + per-threshold results
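To make the aggregation idea concrete: each view of a segment yields per-cue scores (e.g. sign, meter, curb), and the aggregation step combines them across a segment's views into a single presence decision. The sketch below uses a simple mean-and-threshold rule purely for illustration; the actual rule, cue names, and thresholds live in parking_aggregation_project/scripts/ and metadata/.

from statistics import mean

# Hypothetical per-view cue scores for one segment (5 views). Keys and
# values are illustrative; real scores are read from metadata/.
views = [
    {"sign": 0.82, "meter": 0.10, "curb": 0.55},
    {"sign": 0.74, "meter": 0.05, "curb": 0.61},
    {"sign": 0.11, "meter": 0.02, "curb": 0.48},
    {"sign": 0.66, "meter": 0.09, "curb": 0.52},
    {"sign": 0.71, "meter": 0.04, "curb": 0.57},
]

# Mean-pool each cue over the views, then threshold the strongest cue.
# Both choices are placeholders for the repo's real aggregation rule.
segment_score = {cue: mean(v[cue] for v in views) for cue in views[0]}
parking_present = max(segment_score.values()) > 0.5
print(segment_score, parking_present)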
Reproducing the main results
- Download MTSD and Mapillary Vistas v2.0 from their official sources. The repo only ships 5-image samples for each split.
- Build the binary parking-sign YOLO dataset and run training / evaluation from mtsd_project/scripts/ (see the training sketch after this list).
- Train the curb segmentation U-Net and run the zero-shot parking-meter evaluation from Mapillary_Vistas_Dataset/scripts/.
- Build the synthetic pseudo-segment benchmark and run the segment-level aggregation evaluation from parking_aggregation_project/scripts/.
- Reproduce the annotated qualitative figures by running the visualization scripts against the manual_segments_dataset/ bundle (also available as segments.zip from the v1.0 release).
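For orientation, a training call consistent with the 50-epoch sign-detector checkpoint above might look like the sketch below; the exact hyperparameters are set in mtsd_project/scripts/, so the image size and batch size here are assumptions:

from ultralytics import YOLO

# Fine-tune a COCO-pretrained YOLOv8m base on the binary parking-sign
# dataset described by parking_signs.yaml. epochs=50 matches the released
# checkpoint; imgsz and batch are illustrative defaults.
model = YOLO("yolov8m.pt")
model.train(data="mtsd_project/parking_signs.yaml",
            epochs=50, imgsz=640, batch=16)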
Environment
The experiments were primarily run on Kaggle (Tesla T4 and P100 GPUs) and a local Apple Silicon machine. Approximate environment:
- Python 3.10+
- PyTorch 2.x
- Ultralytics YOLO (ultralytics package)
- segmentation-models-pytorch for the curb U-Net
- opencv-python, numpy, scikit-learn, matplotlib
Exact versions and the recommended environment are in the repo's requirements.txt / environment.yml.
If you have trouble reproducing a result or running a script, please open an issue on the GitHub repository and we'll take a look.