Mirror of https://github.com/blakeblackshear/frigate.git (synced 2026-04-03 06:40:22 +00:00)
Compare commits: 4 commits, bda7fcc784 ... 7f3f62e46d

| Author | SHA1 | Date |
|---|---|---|
| | 7f3f62e46d | |
| | 2920127ada | |
| | 2c1ded37a1 | |
| | 17912b4695 | |
@@ -9,15 +9,9 @@ Face recognition identifies known individuals by matching detected faces with pr
 
 ### Face Detection
 
-Users running a Frigate+ model (or any custom model that natively detects faces) should ensure that `face` is added to the [list of objects to track](../plus/#available-label-types) either globally or for a specific camera. This will allow face detection to run at the same time as object detection and be more efficient.
+When running a Frigate+ model (or any custom model that natively detects faces), ensure that `face` is added to the [list of objects to track](../plus/#available-label-types) either globally or for a specific camera. This allows face detection to run at the same time as object detection and is more efficient.
 
-Users without a model that detects faces can still run face recognition. Frigate uses a lightweight DNN face detection model that runs on the CPU. In this case, you should _not_ define `face` in your list of objects to track.
-
-:::note
-
-Frigate needs to first detect a `face` before it can recognize a face.
-
-:::
+When running a default COCO model or another model that does not include `face` as a detectable label, face detection will run via CV2 using a lightweight DNN model that runs on the CPU. In this case, you should _not_ define `face` in your list of objects to track.
 
 ### Face Recognition
@@ -26,7 +20,7 @@ Frigate has support for two face recognition model types:
 
 - **small**: Frigate will run a FaceNet embedding model to recognize faces, which runs locally on the CPU. This model is optimized for efficiency and is not as accurate.
 - **large**: Frigate will run a large ArcFace embedding model that is optimized for accuracy. It is only recommended to be run when an integrated or dedicated GPU is available.
 
-In both cases a lightweight face landmark detection model is also used to align faces before running the recognition model.
+In both cases, a lightweight face landmark detection model is also used to align faces before running recognition.
 
 ## Minimum System Requirements
@@ -88,9 +82,9 @@ When choosing images to include in the face training set it is recommended to al
 
 - If it is difficult to make out details in a persons face it will not be helpful in training.
 - Avoid images with extreme under/over-exposure.
 - Avoid blurry / pixelated images.
-- Avoid training on infrared (grayscale). The models are trained on color images and will be able to extract features from grayscale images.
+- Avoid training on infrared (gray-scale). The models are trained on color images and will be able to extract features from gray-scale images.
 - Using images of people wearing hats / sunglasses may confuse the model.
-- Do not upload too many similar images at the same time, it is recommended to train no more than 4-6 similar images for each person to avoid overfitting.
+- Do not upload too many similar images at the same time; it is recommended to train no more than 4-6 similar images for each person to avoid over-fitting.
 
 :::
@@ -100,7 +94,7 @@ When first enabling face recognition it is important to build a foundation of st
 
 Then it is recommended to use the `Face Library` tab in Frigate to select and train images for each person as they are detected. When building a strong foundation it is strongly recommended to only train on images that are straight-on. Ignore images from cameras that recognize faces from an angle.
 
-Aim to strike a balance between the quality of images while also having a range of conditions (day / night, different weather conditions, different times of day, etc.) in order to have diversity in the images used for each person and not have overfitting.
+Aim to strike a balance between the quality of images while also having a range of conditions (day / night, different weather conditions, different times of day, etc.) in order to have diversity in the images used for each person and not have over-fitting.
 
 Once a person starts to be consistently recognized correctly on images that are straight-on, it is time to move on to the next step.
@@ -112,11 +106,11 @@ Once straight-on images are performing well, start choosing slightly off-angle i
 
 ### Why can't I bulk upload photos?
 
-It is important to methodically add photos to the library, bulk importing photos (especially from a general photo library) will lead to overfitting in that particular scenario and hurt recognition performance.
+It is important to methodically add photos to the library; bulk importing photos (especially from a general photo library) will lead to over-fitting in that particular scenario and hurt recognition performance.
 
 ### Why do unknown people score similarly to known people?
 
-This can happen for a few different reasons, but this is usually an indicator that the training set needs to be improved. This is often related to overfitting:
+This can happen for a few different reasons, but this is usually an indicator that the training set needs to be improved. This is often related to over-fitting:
 
 - If you train with only a few images per person, especially if those images are very similar, the recognition model becomes overly specialized to those specific images.
 - When you provide images with different poses, lighting, and expressions, the algorithm extracts features that are consistent across those variations.
@@ -124,4 +118,4 @@ This can happen for a few different reasons, but this is usually an indicator th
 
 ### I see scores above the threshold in the train tab, but a sub label wasn't assigned?
 
-The Frigate considers the recognition scores across all recogntion attempts for each person object. The scores are continually weighted based on the area of the face, and a sub label will only be assigned to person if a person is confidently recognized consistently. This avoids cases where a single high confidence recognition would throw off the results.
+Frigate considers the recognition scores across all recognition attempts for each person object. The scores are continually weighted based on the area of the face, and a sub label will only be assigned to a person if they are confidently recognized consistently. This avoids cases where a single high confidence recognition would throw off the results.
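The area-weighted scoring described above can be sketched as a weighted average, where the face's pixel area is the weight. This is an illustrative simplification with hypothetical names, not Frigate's internal formula:

```python
def weighted_recognition_score(attempts: list[tuple[float, int]]) -> float:
    """attempts: (score, face_area_px) pairs across a person object's lifetime.

    Weighting each attempt by face area means one lucky high score on a
    tiny, distant face cannot outweigh consistent results on large faces.
    """
    total_area = sum(area for _, area in attempts)
    if total_area == 0:
        return 0.0
    return sum(score * area for score, area in attempts) / total_area

# a single 0.98 on a tiny 20x20 face is pulled down by consistent mid
# scores on much larger faces
attempts = [(0.98, 400), (0.55, 3600), (0.60, 3000)]
print(round(weighted_recognition_score(attempts), 3))  # 0.596
```

With a `recognition_threshold` of 0.9, this person would not receive a sub label even though one individual attempt scored 0.98, which is exactly the behavior the FAQ answer describes.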
@@ -62,7 +62,7 @@ Fine-tune the LPR feature using these optional parameters:
 
 - **`detection_threshold`**: License plate object detection confidence score required before recognition runs.
   - Default: `0.7`
-  - Note: This is field only applies to the standalone license plate detection model, `min_score` should be used to filter for models that have license plate detection built in.
+  - Note: This field only applies to the standalone license plate detection model; `threshold` and `min_score` object filters should be used for models like Frigate+ that have license plate detection built in.
 - **`min_area`**: Defines the minimum area (in pixels) a license plate must be before recognition runs.
   - Default: `1000` pixels. Note: this is intentionally set very low as it is an _area_ measurement (length x width). For reference, 1000 pixels represents a ~32x32 pixel square in your camera image.
   - Depending on the resolution of your camera's `detect` stream, you can increase this value to ignore small or distant plates.
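Because `min_area` is an area in pixels (width times height), a quick calculation shows what a given value filters out on your detect stream:

```python
def plate_area(width_px: int, height_px: int) -> int:
    """Bounding-box area of a license plate in pixels."""
    return width_px * height_px

print(plate_area(32, 32))           # 1024 px^2, just above the 1000 default
print(plate_area(32, 32) >= 1000)   # True: recognition would run
print(plate_area(20, 40) >= 1000)   # False: an 800 px^2 plate is skipped
```

If your plates render larger than this on a high-resolution detect stream, raising `min_area` skips recognition attempts on plates too small to OCR reliably.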
@@ -137,17 +137,86 @@ lpr:
     - "MN D3163"
 ```
 
 :::note
 
 If you want to detect cars on cameras but don't want to use resources to run LPR on those cars, you should disable LPR for those specific cameras.
 
 ```yaml
 cameras:
   side_yard:
     lpr:
       enabled: False
     ...
 ```
 
 :::
 
 ## Dedicated LPR Cameras
 
 Dedicated LPR cameras are single-purpose cameras with powerful optical zoom to capture license plates on distant vehicles, often with fine-tuned settings to capture plates at night.
 
-Users with a dedicated LPR camera can run Frigate's LPR by specifying a camera type of `lpr` in the camera configuration. An example config for a dedicated LPR camera might look like this:
+Users can configure Frigate's LPR in two different ways depending on whether they are using a Frigate+ model:
+
+### Using a Frigate+ Model
+
+Users running a Frigate+ model (or any model that natively detects `license_plate`) can take advantage of `license_plate` detection. This allows license plates to be treated as standard objects in dedicated LPR mode, meaning that alerts, detections, snapshots, zones, and other Frigate features work as usual, and plates are detected efficiently through your configured object detector.
+
+An example configuration for a dedicated LPR camera using a Frigate+ model:
+
+```yaml
+# LPR global configuration
+lpr:
+  enabled: True
+
+# Dedicated LPR camera configuration
+cameras:
+  dedicated_lpr_camera:
+    type: "lpr" # required to use dedicated LPR camera mode
+    detect:
+      enabled: True
+      fps: 5 # increase if vehicles move quickly
+      min_initialized: 2 # set at fps divided by 3 for very fast cars
+      width: 1920
+      height: 1080
+    objects:
+      track:
+        - license_plate
+      filters:
+        license_plate:
+          threshold: 0.7
+    motion:
+      threshold: 30
+      contour_area: 60 # use an increased value to tune out small motion changes
+      improve_contrast: false
+      mask: 0.704,0.007,0.709,0.052,0.989,0.055,0.993,0.001 # ensure your camera's timestamp is masked
+    record:
+      enabled: True # disable recording if you only want snapshots
+    snapshots:
+      enabled: True
+    review:
+      detections:
+        labels:
+          - license_plate
+```
+
+With this setup:
+
+- License plates are treated as normal objects in Frigate.
+- Scores, alerts, detections, snapshots, zones, and object masks work as expected.
+- Snapshots will have license plate bounding boxes on them.
+- The `frigate/events` MQTT topic will publish tracked object updates.
+- Debug view will display `license_plate` bounding boxes.
+
+### Using the Secondary LPR Pipeline (Without Frigate+)
+
+If you are not running a Frigate+ model, you can use Frigate's built-in secondary dedicated LPR pipeline. In this mode, Frigate bypasses the standard object detection pipeline and runs a local license plate detector model on the full frame whenever motion activity occurs.
+
+An example configuration for a dedicated LPR camera using the secondary pipeline:
+
+```yaml
+# LPR global configuration
+lpr:
+  enabled: True
+  min_plate_length: 4
+  detection_threshold: 0.7 # change if necessary
+
+# Dedicated LPR camera configuration
@@ -156,14 +225,15 @@ cameras:
     type: "lpr" # required to use dedicated LPR camera mode
     lpr:
       enabled: True
       expire_time: 3 # optional, default
+      enhancement: 3 # optional, enhance the image before trying to recognize characters
     ffmpeg: ...
     detect:
-      enabled: False # optional, disable Frigate's standard object detection pipeline
-      fps: 5 # keep this at 5, higher values are unnecessary for dedicated LPR mode and could overwhelm the detector
+      enabled: False # disable Frigate's standard object detection pipeline
+      fps: 5 # increase if necessary, though high values may slow down Frigate's enrichments pipeline and use considerable CPU
       width: 1920
       height: 1080
     objects:
       track: [] # required when not using a Frigate+ model for dedicated LPR mode
     motion:
       threshold: 30
       contour_area: 60 # use an increased value here to tune out small motion changes
@@ -178,31 +248,38 @@ cameras:
       default: 7
 ```
 
 The camera-level `type` setting tells Frigate to treat your camera as a dedicated LPR camera. Setting this option bypasses Frigate's standard object detection pipeline so that a `car` does not need to be detected to run LPR. This dedicated LPR pipeline does not utilize defined zones or object masks, and the license plate detector is always run on the full frame whenever motion activity occurs. If a plate is found, a snapshot at the highest scoring moment is saved as a `car` object, visible in Explore and searchable by the recognized plate via Explore's More Filters.
 
 An optional config variable for dedicated LPR cameras only, `expire_time`, can be specified under the `lpr` configuration at the camera level to change the time it takes for Frigate to consider a previously tracked plate as expired.
 
 :::note
 
-When using `type: "lpr"` for a camera, a non-standard object detection pipeline is used. Any detected license plates on dedicated LPR cameras are treated similarly to manual events in Frigate. Note that for `car` objects with license plates:
+With this setup:
 
+- The standard object detection pipeline is bypassed. Any detected license plates on dedicated LPR cameras are treated similarly to manual events in Frigate. You must **not** specify `license_plate` as an object to track.
+- The license plate detector runs on the full frame whenever motion is detected and processes frames according to your detect `fps` setting.
 - Review items will always be classified as a `detection`.
 - Snapshots will always be saved.
 - Tracked objects are retained according to your retain settings for `record` and `snapshots`.
-- Zones and object masks cannot be used.
-- Debug view may not show `license_plate` bounding boxes, even if you are using a Frigate+ model for your standard object detection pipeline.
-- The `frigate/events` MQTT topic will not publish tracked object updates, though `frigate/reviews` will if recordings are enabled.
+- Zones and object masks are **not** used.
+- The `frigate/events` MQTT topic will **not** publish tracked object updates, though `frigate/reviews` will if recordings are enabled.
+- License plate snapshots are saved at the highest-scoring moment and appear in Explore.
+- Debug view will not show `license_plate` bounding boxes.
 
 :::
 
+### Summary
+
+| Feature                 | Native `license_plate` detecting Model (like Frigate+) | Secondary Pipeline (without native model or Frigate+)           |
+| ----------------------- | ------------------------------------------------------ | --------------------------------------------------------------- |
+| License Plate Detection | Uses `license_plate` as a tracked object               | Runs a dedicated LPR pipeline                                   |
+| FPS Setting             | 5 (increase for fast-moving cars)                      | 5 (increase for fast-moving cars, but it may use much more CPU) |
+| Object Detection        | Standard Frigate+ detection applies                    | Bypasses standard object detection                              |
+| Zones & Object Masks    | Supported                                              | Not supported                                                   |
+| Debug View              | May show `license_plate` bounding boxes                | May **not** show `license_plate` bounding boxes                 |
+| MQTT `frigate/events`   | Publishes tracked object updates                       | Does **not** publish tracked object updates                     |
+| Explore                 | Recognized plates available in More Filters            | Recognized plates available in More Filters                     |
+
+By selecting the appropriate configuration, users can optimize their dedicated LPR cameras based on whether they are using a Frigate+ model or the secondary LPR pipeline.
+
 ### Best practices for using Dedicated LPR camera mode
 
 - Tune your motion detection and increase the `contour_area` until you see only larger motion boxes being created as cars pass through the frame (likely somewhere between 50-90 for a 1920x1080 detect stream). Increasing the `contour_area` filters out small areas of motion and will prevent excessive resource use from looking for license plates in frames that don't even have a car passing through it.
 - Disable the `improve_contrast` motion setting, especially if you are running LPR at night and the frame is mostly dark. This will prevent small pixel changes and smaller areas of motion from triggering license plate detection.
 - Ensure your camera's timestamp is covered with a motion mask so that it's not incorrectly detected as a license plate.
 - While not strictly required, it may be beneficial to disable standard object detection on your dedicated LPR camera (`detect` --> `enabled: False`). If you've set the camera type to `"lpr"`, license plate detection will still be performed on the entire frame when motion occurs.
 - If multiple tracked objects are being produced for the same license plate, you can tweak the `expire_time` to prevent plates from being expired from the view as quickly.
-- You may need to change your camera settings for a clearer image or decrease your global `recognition_threshold` config if your plates are not being accurately recognized at night.
+- For non-Frigate+ users, you may need to change your camera settings for a clearer image or decrease your global `recognition_threshold` config if your plates are not being accurately recognized at night.
+- The secondary pipeline mode runs a local AI model on your CPU to detect plates. Increasing detect `fps` will increase CPU usage proportionally.
 
 ## FAQ
@@ -556,8 +556,8 @@ face_recognition:
   recognition_threshold: 0.9
   # Optional: Min area of detected face box to consider running face recognition (default: shown below)
   min_area: 500
-  # Optional: Save images of recognized faces for training (default: shown below)
-  save_attempts: True
+  # Optional: Number of images of recognized faces to save for training (default: shown below)
+  save_attempts: 100
   # Optional: Apply a blur quality filter to adjust confidence based on the blur level of the image (default: shown below)
   blur_confidence_filter: True
@@ -88,7 +88,9 @@ class CameraState:
                     thickness = 1
                 else:
                     thickness = 2
-                    color = self.config.model.colormap[obj["label"]]
+                    color = self.config.model.colormap.get(
+                        obj["label"], (255, 255, 255)
+                    )
             else:
                 thickness = 1
                 color = (255, 0, 0)
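The change above (and the matching ones later in this diff) swaps direct colormap indexing for `dict.get` with a white fallback. The behavioral difference for a label that was never assigned a color, in generic Python independent of Frigate:

```python
colormap = {"car": (0, 128, 255), "person": (0, 255, 0)}

# direct indexing raises KeyError for a label with no assigned color,
# e.g. a label from a custom model that the colormap never saw
try:
    color = colormap["license_plate"]
except KeyError:
    color = None
print(color)  # None

# .get() with a default falls back to white instead of crashing
color = colormap.get("license_plate", (255, 255, 255))
print(color)  # (255, 255, 255)
```

This matters here because dedicated LPR mode introduces `license_plate` as a top-level tracked label, which may be absent from a model's colormap.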
@@ -110,7 +112,9 @@ class CameraState:
                     and obj["frame_time"] == frame_time
                 ):
                     thickness = 5
-                    color = self.config.model.colormap[obj["label"]]
+                    color = self.config.model.colormap.get(
+                        obj["label"], (255, 255, 255)
+                    )
 
             # debug autotracking zooming - show the zoom factor box
             if (
@@ -21,7 +21,6 @@ from frigate.comms.event_metadata_updater import (
     EventMetadataPublisher,
     EventMetadataTypeEnum,
 )
-from frigate.config.camera.camera import CameraTypeEnum
 from frigate.const import CLIPS_DIR
 from frigate.embeddings.onnx.lpr_embedding import LPR_EMBEDDING_SIZE
 from frigate.util.builtin import EventsPerSecond
@@ -972,7 +971,7 @@ class LicensePlateProcessingMixin:
             (
                 now,
                 camera,
-                "car",
+                "license_plate",
                 event_id,
                 True,
                 plate_score,
@@ -994,9 +993,7 @@ class LicensePlateProcessingMixin:
         if not self.config.cameras[camera].lpr.enabled:
             return
 
-        if not dedicated_lpr and self.config.cameras[camera].type == CameraTypeEnum.lpr:
-            return
-
         # dedicated LPR cam without frigate+
         if dedicated_lpr:
             id = "dedicated-lpr"
@@ -1050,8 +1047,11 @@ class LicensePlateProcessingMixin:
         else:
             id = obj_data["id"]
 
-            # don't run for non car objects
-            if obj_data.get("label") != "car":
+            # don't run for non car or non license plate (dedicated lpr with frigate+) objects
+            if (
+                obj_data.get("label") != "car"
+                and obj_data.get("label") != "license_plate"
+            ):
                 logger.debug(
                     f"{camera}: Not a processing license plate for non car object."
                 )
@@ -1131,26 +1131,34 @@ class LicensePlateProcessingMixin:
                     license_plate[0] : license_plate[2],
                 ]
             else:
-                # don't run for object without attributes
-                if not obj_data.get("current_attributes"):
+                # don't run for object without attributes if this isn't dedicated lpr with frigate+
+                if (
+                    not obj_data.get("current_attributes")
+                    and obj_data.get("label") != "license_plate"
+                ):
                     logger.debug(f"{camera}: No attributes to parse.")
                     return
 
-                attributes: list[dict[str, any]] = obj_data.get(
-                    "current_attributes", []
-                )
-                for attr in attributes:
-                    if attr.get("label") != "license_plate":
-                        continue
+                if obj_data.get("label") == "car":
+                    attributes: list[dict[str, any]] = obj_data.get(
+                        "current_attributes", []
+                    )
+                    for attr in attributes:
+                        if attr.get("label") != "license_plate":
+                            continue
 
-                    if license_plate is None or attr.get(
-                        "score", 0.0
-                    ) > license_plate.get("score", 0.0):
-                        license_plate = attr
+                        if license_plate is None or attr.get(
+                            "score", 0.0
+                        ) > license_plate.get("score", 0.0):
+                            license_plate = attr
 
-                # no license plates detected in this frame
-                if not license_plate:
-                    return
+                    # no license plates detected in this frame
+                    if not license_plate:
+                        return
+
+                # we are using dedicated lpr with frigate+
+                if obj_data.get("label") == "license_plate":
+                    license_plate = obj_data
 
             license_plate_box = license_plate.get("box")
@@ -1160,7 +1168,9 @@ class LicensePlateProcessingMixin:
                 or area(license_plate_box)
                 < self.config.cameras[obj_data["camera"]].lpr.min_area
             ):
-                logger.debug(f"{camera}: Invalid license plate box {license_plate}")
+                logger.debug(
+                    f"{camera}: Area for license plate box {area(license_plate_box)} is less than min_area {self.config.cameras[obj_data['camera']].lpr.min_area}"
+                )
                 return
 
         license_plate_frame = cv2.cvtColor(frame, cv2.COLOR_YUV2BGR_I420)
@@ -1239,8 +1249,11 @@ class LicensePlateProcessingMixin:
             )
             return
 
-        # For LPR cameras, match or assign plate ID using Jaro-Winkler distance
-        if dedicated_lpr:
+        # For dedicated LPR cameras, match or assign plate ID using Jaro-Winkler distance
+        if (
+            dedicated_lpr
+            and "license_plate" not in self.config.cameras[camera].objects.track
+        ):
            plate_id = None
 
            for existing_id, data in self.detected_license_plates.items():
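The code above matches new plate readings against recently seen plates using Jaro-Winkler similarity, which tolerates single-character OCR mistakes so that "ABC1234" and "ABC1934" collapse into one tracked plate. A self-contained textbook implementation of the metric (a sketch of the technique, not Frigate's exact code; the 0.9 threshold below is illustrative):

```python
def jaro(s1: str, s2: str) -> float:
    """Jaro similarity in [0, 1]: shared characters within a sliding
    window, penalized by transpositions among the matched characters."""
    if s1 == s2:
        return 1.0
    len1, len2 = len(s1), len(s2)
    if not len1 or not len2:
        return 0.0
    match_dist = max(len1, len2) // 2 - 1
    s1_matches = [False] * len1
    s2_matches = [False] * len2
    matches = 0
    for i, c in enumerate(s1):
        start = max(0, i - match_dist)
        end = min(i + match_dist + 1, len2)
        for j in range(start, end):
            if not s2_matches[j] and s2[j] == c:
                s1_matches[i] = s2_matches[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    transpositions, k = 0, 0
    for i in range(len1):
        if s1_matches[i]:
            while not s2_matches[k]:
                k += 1
            if s1[i] != s2[k]:
                transpositions += 1
            k += 1
    transpositions //= 2
    return (matches / len1 + matches / len2
            + (matches - transpositions) / matches) / 3

def jaro_winkler(s1: str, s2: str, p: float = 0.1) -> float:
    """Boost the Jaro score for strings sharing a prefix (up to 4 chars),
    which suits plates since OCR errors cluster in the trailing digits."""
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1, s2):
        if a != b or prefix == 4:
            break
        prefix += 1
    return j + prefix * p * (1 - j)

# two OCR readings of the same plate differing by one character
print(jaro_winkler("ABC1234", "ABC1934") > 0.9)  # True: same plate
print(jaro_winkler("ABC1234", "XYZ9876"))        # 0.0: different plate
```

Libraries such as `rapidfuzz` provide an optimized version of the same metric; the point here is only why near-identical readings merge into one plate ID.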
@@ -1306,8 +1319,11 @@ class LicensePlateProcessingMixin:
             (id, top_plate, avg_confidence),
         )
 
-        if dedicated_lpr:
-            # save the best snapshot
+        # save the best snapshot for dedicated lpr cams not using frigate+
+        if (
+            dedicated_lpr
+            and "license_plate" not in self.config.cameras[camera].objects.track
+        ):
             logger.debug(
                 f"{camera}: Writing snapshot for {id}, {top_plate}, {current_time}"
             )
@@ -457,7 +457,11 @@ class EmbeddingMaintainer(threading.Thread):
 
         camera_config = self.config.cameras[camera]
 
-        if not camera_config.type == CameraTypeEnum.lpr:
+        if (
+            camera_config.type != CameraTypeEnum.lpr
+            or "license_plate" in camera_config.objects.track
+        ):
+            # we're not a dedicated lpr camera or we are one but we're using frigate+
             return
 
         try:
@@ -442,7 +442,7 @@ class TrackedObject:
 
         if bounding_box:
             thickness = 2
-            color = self.colormap[self.obj_data["label"]]
+            color = self.colormap.get(self.obj_data["label"], (255, 255, 255))
 
             # draw the bounding boxes on the frame
             box = self.thumbnail_data["box"]
@@ -15,6 +15,7 @@ from frigate.camera import CameraMetrics, PTZMetrics
 from frigate.comms.config_updater import ConfigSubscriber
 from frigate.comms.inter_process import InterProcessRequestor
 from frigate.config import CameraConfig, DetectConfig, ModelConfig
+from frigate.config.camera.camera import CameraTypeEnum
 from frigate.const import (
     CACHE_DIR,
     CACHE_SEGMENT_FORMAT,
@@ -519,6 +520,7 @@ def track_camera(
         frame_queue,
         frame_shape,
         model_config,
+        config,
         config.detect,
         frame_manager,
         motion_detector,
@@ -585,6 +587,7 @@ def process_frames(
     frame_queue: mp.Queue,
     frame_shape,
     model_config: ModelConfig,
+    camera_config: CameraConfig,
     detect_config: DetectConfig,
     frame_manager: FrameManager,
     motion_detector: MotionDetector,
@@ -612,6 +615,29 @@ def process_frames(
 
     region_min_size = get_min_region_size(model_config)
 
+    attributes_map = model_config.attributes_map
+    all_attributes = model_config.all_attributes
+
+    # remove license_plate from attributes if this camera is a dedicated LPR cam
+    if camera_config.type == CameraTypeEnum.lpr:
+        modified_attributes_map = model_config.attributes_map.copy()
+
+        if (
+            "car" in modified_attributes_map
+            and "license_plate" in modified_attributes_map["car"]
+        ):
+            modified_attributes_map["car"] = [
+                attr
+                for attr in modified_attributes_map["car"]
+                if attr != "license_plate"
+            ]
+
+        attributes_map = modified_attributes_map
+
+        all_attributes = [
+            attr for attr in model_config.all_attributes if attr != "license_plate"
+        ]
+
     while not stop_event.is_set():
         _, updated_enabled_config = enabled_config_subscriber.check_for_update()
@@ -805,9 +831,7 @@ def process_frames(
         # if detection was run on this frame, consolidate
         if len(regions) > 0:
             tracked_detections = [
-                d
-                for d in consolidated_detections
-                if d[0] not in model_config.all_attributes
+                d for d in consolidated_detections if d[0] not in all_attributes
             ]
             # now that we have refined our detections, we need to track objects
             object_tracker.match_and_update(
@@ -819,7 +843,7 @@ def process_frames(
 
             # group the attribute detections based on what label they apply to
             attribute_detections: dict[str, list[TrackedObjectAttribute]] = {}
-            for label, attribute_labels in model_config.attributes_map.items():
+            for label, attribute_labels in attributes_map.items():
                 attribute_detections[label] = [
                     TrackedObjectAttribute(d)
                     for d in consolidated_detections
@@ -836,8 +860,7 @@ def process_frames(
             for attributes in attribute_detections.values():
                 for attribute in attributes:
                     filtered_objects = filter(
-                        lambda o: attribute.label
-                        in model_config.attributes_map.get(o["label"], []),
+                        lambda o: attribute.label in attributes_map.get(o["label"], []),
                         all_objects,
                     )
                     selected_object_id = attribute.find_best_object(filtered_objects)
@@ -885,7 +908,7 @@ def process_frames(
             for obj in object_tracker.tracked_objects.values():
                 if obj["frame_time"] == frame_time:
                     thickness = 2
-                    color = model_config.colormap[obj["label"]]
+                    color = model_config.colormap.get(obj["label"], (255, 255, 255))
                 else:
                     thickness = 1
                     color = (255, 0, 0)
@@ -16,6 +16,7 @@
   "createFaceLibrary": {
     "title": "创建人脸库",
     "desc": "创建一个新的人脸库",
    "new": "新建人脸",
+    "nextSteps": "建议使用“训练”选项卡为每个检测到的人选择并训练图像。在打好基础前,强烈建议训练仅使用正面图像。而不是从摄像机中识别到的角度拍摄的人脸图像。"
   },
   "train": {
@@ -87,9 +87,15 @@
     "title": "语义搜索",
     "desc": "Frigate的语义搜索能够让你使用自然语言根据图像本身、自定义的文本描述或自动生成的描述来搜索视频。",
     "readTheDocumentation": "阅读文档(英文)",
-    "reindexOnStartup": {
-      "label": "启动时重新索引",
-      "desc": "每次启动将重新索引并重新处理所有缩略图和描述。<em>关闭该设置后不要忘记重启!</em>"
+    "reindexNow": {
+      "label": "立即重建索引",
+      "desc": "重建索引将为所有跟踪对象重新生成特征向量。该过程将在后台运行,可能会使CPU满载,所需时间取决于跟踪对象的数量。",
+      "confirmTitle": "确认重建索引",
+      "confirmDesc": "确定要为所有跟踪对象重建特征向量索引吗?此过程将在后台运行,但可能会导致CPU满载并耗费较长时间。您可以在探索页面查看进度。",
+      "confirmButton": "重建索引",
+      "success": "重建索引已成功启动。",
+      "alreadyInProgress": "重建索引已在执行中。",
+      "error": "启动重建索引失败:{{errorMessage}}"
     },
     "modelSize": {
       "label": "模型大小",
@@ -113,11 +119,11 @@
     "desc": "用于人脸识别的模型尺寸。",
     "small": {
       "title": "小模型",
-      "desc": "使用<em>小模型</em>将采用OpenCV的局部二值模式直方图(LBPH)算法,可在大多数CPU上高效运行。"
+      "desc": "使用<em>小模型</em>将采用FaceNet人脸特征提取模型,可在大多数CPU上高效运行。"
     },
     "large": {
       "title": "大模型",
-      "desc": "使用<em>大模型</em>将采用ArcFace人脸嵌入模型,若适用将自动在GPU上运行。"
+      "desc": "使用<em>大模型</em>将采用ArcFace人脸特征提取模型,若条件允许将自动使用GPU运行。"
     }
   }
 },
@@ -3,7 +3,7 @@
   "cameras": "摄像头统计 - Frigate",
   "storage": "存储统计 - Frigate",
   "general": "常规统计 - Frigate",
-  "features": "功能统计 - Frigate",
+  "enrichments": "增强功能统计 - Frigate",
   "logs": {
     "frigate": "Frigate 日志 - Frigate",
     "go2rtc": "Go2RTC 日志 - Frigate",
@@ -144,8 +144,9 @@
     "healthy": "系统运行正常",
     "reindexingEmbeddings": "正在重新索引嵌入(已完成 {{processed}}%)"
   },
-  "features": {
-    "title": "功能",
+  "enrichments": {
+    "title": "增强功能",
+    "infPerSecond": "每秒推理次数",
     "embeddings": {
       "image_embedding_speed": "图像特征提取速度",
       "face_embedding_speed": "人脸特征提取速度",
@@ -472,11 +472,20 @@ export default function LiveDashboardView({
           } else {
             grow = "aspect-video";
           }
-          const streamName =
-            currentGroupStreamingSettings?.[camera.name]?.streamName ||
-            camera?.live?.streams
-              ? Object?.values(camera?.live?.streams)?.[0]
-              : "";
+          const availableStreams = camera.live.streams || {};
+          const firstStreamEntry = Object.values(availableStreams)[0] || "";
+
+          const streamNameFromSettings =
+            currentGroupStreamingSettings?.[camera.name]?.streamName || "";
+          const streamExists =
+            streamNameFromSettings &&
+            Object.values(availableStreams).includes(
+              streamNameFromSettings,
+            );
+
+          const streamName = streamExists
+            ? streamNameFromSettings
+            : firstStreamEntry;
           const autoLive =
             currentGroupStreamingSettings?.[camera.name]?.streamType !==
             "no-streaming";