Compare commits

...

5 Commits

Author SHA1 Message Date
Josh Hawkins
cb27bdb2f7
LPR fixes (#17588)
* docs

* docs

* docs

* docs

* fix box merging logic

* always run paddleocr models on cpu

* docs clarity

* fix docs

* docs
2025-04-07 15:25:46 -05:00
Josh Hawkins
f2840468b4
Fix websocket enabled state (#17585)
* use camera activity hook instead of websocket enabled state

* use original websocket but use ternary operator instead
2025-04-07 12:24:53 -05:00
GuoQing Liu
f978ef2cda
Update README_CN (#17581)
* Update README_CN.md

* Update README_CN.md

* Update README_CN.md
2025-04-07 08:32:43 -06:00
Josh Hawkins
676180afca
Update translation docs for weblate (#17579)
* Update translation docs for weblate

* link
2025-04-07 08:12:42 -05:00
Josh Hawkins
a29d4382e1
Catch crash when openai compatible endpoints don't return results correctly (#17572) 2025-04-07 06:59:39 -06:00
7 changed files with 57 additions and 28 deletions

View File

@ -4,12 +4,15 @@
# Frigate - A local NVR with realtime object detection
[English](https://github.com/blakeblackshear/frigate) | \[简体中文\]
[English](https://github.com/blakeblackshear/frigate) | \[简体中文\]
<a href="https://hosted.weblate.org/engage/frigate-nvr/-/zh_Hans/">
<img src="https://hosted.weblate.org/widget/frigate-nvr/-/zh_Hans/svg-badge.svg" alt="翻译状态" />
</a>
A complete and local NVR designed for [Home Assistant](https://www.home-assistant.io) with AI object detection. Uses OpenCV and TensorFlow to perform realtime object detection locally for IP cameras.
Use of the optional [Google Coral Accelerator](https://coral.ai/products/) is highly recommended. In that scenario the Coral outperforms even the best CPUs and can easily process 100+ FPS with very little power overhead.
Use of a GPU or an AI accelerator such as the [Google Coral Accelerator](https://coral.ai/products/) or [Hailo](https://hailo.ai/) is highly recommended. They outperform even the best CPUs and deliver better performance at very low power consumption.
- Tight integration with Home Assistant via a [custom component](https://github.com/blakeblackshear/frigate-hass-integration)
- Designed to minimize resource use and maximize performance by only looking for objects when and where it is necessary
- Leverages multiprocessing heavily with an emphasis on realtime over processing every frame
@ -25,6 +28,8 @@
You can view the documentation at https://docs.frigate.video
The documentation is not yet translated; translations will be provided in the future.
## Sponsorship
If you would like to support development with a donation, please use [Github Sponsors](https://github.com/sponsors/blakeblackshear).
@ -50,3 +55,10 @@
<div>
<img width="800" alt="内置遮罩和区域编辑器" src="https://github.com/blakeblackshear/frigate/assets/569905/d7885fc3-bfe6-452f-b7d0-d957cb3e31f5">
</div>
## Translations
We use the [Weblate](https://hosted.weblate.org/projects/frigate-nvr/) platform for translations; you are welcome to join in and help improve them.
## Chinese Discussion Community
You are welcome to join the unofficial Chinese discussion QQ group: 1043861059

View File

@ -19,7 +19,7 @@ When a plate is recognized, the recognized name is:
Users running a Frigate+ model (or any custom model that natively detects license plates) should ensure that `license_plate` is added to the [list of objects to track](https://docs.frigate.video/plus/#available-label-types) either globally or for a specific camera. This will improve the accuracy and performance of the LPR model.
Users without a model that detects license plates can still run LPR. Frigate uses a lightweight YOLOv9 license plate detection model that runs on your CPU. In this case, you should _not_ define `license_plate` in your list of objects to track.
Users without a model that detects license plates can still run LPR. Frigate uses a lightweight YOLOv9 license plate detection model that runs on your CPU or GPU. In this case, you should _not_ define `license_plate` in your list of objects to track.
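As a quick reference for the two setups above, here is a minimal config sketch (the camera name is illustrative and not part of this change): users with a plate-detecting model add `license_plate` to the tracked objects, while everyone else leaves it out so the built-in YOLOv9 detector is used.

```yaml
cameras:
  driveway_cam: # hypothetical camera name
    objects:
      track:
        - car
        - license_plate # only add this if your model natively detects plates
```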
:::note
@ -29,7 +29,7 @@ In the default mode, Frigate's LPR needs to first detect a `car` before it can r
## Minimum System Requirements
License plate recognition works by running AI models locally on your system. The models are relatively lightweight and will be auto-selected to run on your CPU or GPU. At least 4GB of RAM is required.
License plate recognition works by running AI models locally on your system. The models are relatively lightweight and will be auto-selected to run on your CPU. At least 4GB of RAM is required.
## Configuration
@ -40,11 +40,11 @@ lpr:
enabled: True
```
Like other enrichments in Frigate, LPR **must be enabled globally** to use the feature. You can disable it for specific cameras at the camera level:
Like other enrichments in Frigate, LPR **must be enabled globally** to use the feature. You should disable it at the camera level for any cameras where you don't want to run LPR on detected cars:
```yaml
cameras:
driveway:
garage:
...
lpr:
enabled: False
@ -174,7 +174,7 @@ cameras:
type: "lpr" # required to use dedicated LPR camera mode
detect:
enabled: True
fps: 5 # increase to 10 if vehicles move quickly across your frame
fps: 5 # increase to 10 if vehicles move quickly across your frame. Higher than 10 is unnecessary and is not recommended.
min_initialized: 2
width: 1920
height: 1080
@ -313,6 +313,10 @@ In normal LPR mode, Frigate requires a `car` to be detected first before recogni
Yes, but performance depends on camera quality, lighting, and infrared capabilities. Make sure your camera can capture clear images of plates at night.
### Can I limit LPR to specific zones?
LPR, like other Frigate enrichments, runs at the camera level rather than the zone level. While you can't restrict LPR to specific zones directly, you can control when recognition runs by setting a `min_area` value to filter out smaller detections.
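For illustration, a minimal sketch assuming `min_area` is set under the `lpr` section (the value is a placeholder, not a recommendation):

```yaml
lpr:
  enabled: True
  min_area: 1500 # ignore plate detections smaller than this many pixels
```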
### How can I match known plates with minor variations?
Use `match_distance` to allow small character mismatches. Alternatively, define multiple variations in `known_plates`.
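A short sketch of both approaches (the plate strings and name are examples only):

```yaml
lpr:
  match_distance: 1 # allow up to one character of difference
  known_plates:
    Delivery Van:
      - "ABC-1234"
      - "ABC-I234" # common OCR confusion between I and 1
```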
@ -336,3 +340,9 @@ Use `match_distance` to allow small character mismatches. Alternatively, define
### Will LPR slow down my system?
LPR's performance impact depends on your hardware. Ensure you have at least 4GB RAM and a capable CPU or GPU for optimal results. If you are running the Dedicated LPR Camera mode, resource usage will be higher compared to users who run a model that natively detects license plates. Tune your motion detection settings for your dedicated LPR camera so that the license plate detection model runs only when necessary.
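One possible starting point for that tuning, as a sketch with placeholder values (the camera name and numbers are illustrative, not tuned recommendations):

```yaml
cameras:
  dedicated_lpr_cam: # hypothetical dedicated LPR camera
    motion:
      threshold: 30 # raise to ignore small pixel-level changes
      contour_area: 15 # require larger contiguous motion regions before detection runs
```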
### I am seeing a YOLOv9 plate detection metric in Enrichment Metrics, but I have a Frigate+ or custom model that detects `license_plate`. Why is the YOLOv9 model running?
The YOLOv9 license plate detector model will run (and the metric will appear) if you've enabled LPR but haven't defined `license_plate` as an object to track, either at the global or camera level.
If you are detecting `car` on cameras where you don't want to run LPR, make sure you disable LPR at the camera level. If you do want to run LPR on those cameras, make sure you define `license_plate` as an object to track.
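A hedged sketch of the two cases described above (camera names are illustrative):

```yaml
cameras:
  street_cam: # cars are detected here, but LPR is not wanted
    lpr:
      enabled: False
  driveway_cam: # LPR is wanted here and the model natively detects plates
    objects:
      track:
        - car
        - license_plate
```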

View File

@ -239,11 +239,8 @@ sudo cp docker/main/rootfs/usr/local/nginx/conf/* /usr/local/nginx/conf/ && sudo
## Contributing translations of the Web UI
If you'd like to contribute translations to Frigate, please follow these steps:
Frigate uses [Weblate](https://weblate.org) to manage translations of the Web UI. To contribute translations, sign up for an account at Weblate and navigate to the Frigate NVR project:
1. Fork the repository and create a new branch specifically for your translation work
2. Locate the localization files in the web/public/locales directory
3. Add or modify the appropriate language JSON files, maintaining the existing key structure while translating only the values
4. Ensure your translations maintain proper formatting, including any placeholder variables (like `{{example}}`)
5. Before submitting, thoroughly review the UI
6. When creating your PR, include a brief description of the languages you've added or updated, and reference any related issues
https://hosted.weblate.org/projects/frigate-nvr/
When translating, maintain the existing key structure while translating only the values. Ensure your translations maintain proper formatting, including any placeholder variables (like `{{example}}`).

View File

@ -309,7 +309,11 @@ class LicensePlateProcessingMixin:
return image.transpose((2, 0, 1))[np.newaxis, ...]
def _merge_nearby_boxes(
self, boxes: List[np.ndarray], plate_width: float, gap_fraction: float = 0.1
self,
boxes: List[np.ndarray],
plate_width: float,
gap_fraction: float = 0.1,
min_overlap_fraction: float = -0.2,
) -> List[np.ndarray]:
"""
Merge bounding boxes that are likely part of the same license plate based on proximity,
@ -329,6 +333,7 @@ class LicensePlateProcessingMixin:
return []
max_gap = plate_width * gap_fraction
min_overlap = plate_width * min_overlap_fraction
# Sort boxes by top left x
sorted_boxes = sorted(boxes, key=lambda x: x[0][0])
@ -353,9 +358,10 @@ class LicensePlateProcessingMixin:
next_bottom = np.max(next_box[:, 1])
# Consider boxes part of the same plate if they are close horizontally or overlap
if horizontal_gap <= max_gap and max(current_top, next_top) <= min(
current_bottom, next_bottom
):
# within the allowed limit and their vertical positions overlap significantly
if min_overlap <= horizontal_gap <= max_gap and max(
current_top, next_top
) <= min(current_bottom, next_bottom):
merged_points = np.vstack((current_box, next_box))
new_box = np.array(
[
@ -379,7 +385,7 @@ class LicensePlateProcessingMixin:
)
current_box = new_box
else:
# If the boxes are not close enough, add the current box to the result
# If the boxes are not close enough or overlap too much, add the current box to the result
merged_boxes.append(current_box)
current_box = next_box

View File

@ -12,13 +12,13 @@ class LicensePlateModelRunner(DataProcessorModelRunner):
def __init__(self, requestor, device: str = "CPU", model_size: str = "large"):
super().__init__(requestor, device, model_size)
self.detection_model = PaddleOCRDetection(
model_size=model_size, requestor=requestor, device=device
model_size=model_size, requestor=requestor, device="CPU"
)
self.classification_model = PaddleOCRClassification(
model_size=model_size, requestor=requestor, device=device
model_size=model_size, requestor=requestor, device="CPU"
)
self.recognition_model = PaddleOCRRecognition(
model_size=model_size, requestor=requestor, device=device
model_size=model_size, requestor=requestor, device="CPU"
)
self.yolov9_detection_model = LicensePlateDetector(
model_size=model_size, requestor=requestor, device=device

View File

@ -54,9 +54,13 @@ class OpenAIClient(GenAIClient):
],
timeout=self.timeout,
)
except TimeoutException as e:
if (
result is not None
and hasattr(result, "choices")
and len(result.choices) > 0
):
return result.choices[0].message.content.strip()
return None
except (TimeoutException, Exception) as e:
logger.warning("OpenAI returned an error: %s", str(e))
return None
if len(result.choices) > 0:
return result.choices[0].message.content.strip()
return None

View File

@ -28,7 +28,7 @@ export default function CameraImage({
const { name } = config ? config.cameras[camera] : "";
const { payload: enabledState } = useEnabledState(camera);
const enabled = enabledState === "ON" || enabledState === undefined;
const enabled = enabledState ? enabledState === "ON" : true;
const [{ width: containerWidth, height: containerHeight }] =
useResizeObserver(containerRef);