Compare commits

...

18 Commits

Author SHA1 Message Date
dependabot[bot]
aca6b70ca5
Merge 795e80f76edd68ab4849984a278c1dea29118125 into 213a1fbd00117df03519efc05be7aebfb7d69f98 2025-11-20 12:58:20 -05:00
Josh Hawkins
213a1fbd00
Miscellaneous Fixes (#20951)
* ensure viewer roles are available in create user dialog

* admin-only endpoint to return unmasked camera paths and go2rtc streams

* remove camera edit dropdown

pushing camera editing from the UI to 0.18

* clean up camera edit form

* rename component for clarity

CameraSettingsView is now CameraReviewSettingsView

* Catch case where user requests clip for time that has no recordings

* ensure emergency cleanup also sets has_clip on overlapping events

improves https://github.com/blakeblackshear/frigate/discussions/20945

* use debug log instead of info

* update docs to recommend tmpfs

* improve display of in-progress events in explore tracking details

* improve seeking logic in tracking details

mimic the logic of DynamicVideoController

* only use ffprobe for duration to avoid blocking

fixes https://github.com/blakeblackshear/frigate/discussions/20737#discussioncomment-14999869

* Revert "only use ffprobe for duration to avoid blocking"

This reverts commit 8b15078005aebcfc6062fd0f773d749e3f4a4ee8.

* update readme to link to object detector docs

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-11-18 15:33:42 -07:00
Josh Hawkins
fbf4388b37
Miscellaneous Fixes (#20897)
* don't flatten the search result cache when updating

this would cause an infinite swr fetch if something was mutated and then fetch was called again

* Properly sort keys for recording summary in StorageMetrics

* tracked object description box tweaks

* Remove ability to right click on elements inside of face popup

* Update reprocess message

* don't show object track until video metadata is loaded

* fix blue line height calc for in progress events

* Use timeline tab by default for notifications but add a query arg for customization

* Try and improve notification opening behavior

* Reduce review item buffering behavior

* ensure logging config is passed to camera capture and tracker processes

* ensure on demand recording stops when browser closes

* improve active line progress height with resize observer

* remove icons and duplicate find similar link in explore context menu

* fix for initial broken image when creating trigger from explore

* display friendly names for triggers in toasts

* lpr and triggers docs updates

* remove icons from dropdowns in face and classification

* fix comma dangle linter issue

* re-add incorrectly removed face library button icons

* fix sidebar nav links on < 768px desktop layout

* allow text to wrap on mark as reviewed button

* match exact pixels

* clarify LPR docs

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-11-17 08:12:05 -06:00
GuoQing Liu
097673b845
chore: i18n use cache key (#20885)
* chore: i18n use cache key

* Fix indentation in Dockerfile for pip command

* Add build argument for GIT_COMMIT_HASH in CI workflow

* Add short-sha output to action.yml

* Update build args to use short SHA output

* build: use vite .env

* Remove unnecessary newline in Dockerfile

* Define proxy host variable in vite.config.ts

Add a new line to define the proxy host variable.
2025-11-14 09:36:46 -06:00
GuoQing Liu
d56cf59b9a
fix: fix "Always Show Camera Names" label switch id wrong (#20922) 2025-11-14 09:23:43 -06:00
GuoQing Liu
de066d0062
Fix i18n (#20857)
* fix: fix the missing i18n key

* fix: fix trackedObject i18n keys count variable

* fix: fix some pages audio label missing i18n

* fix: add 6214d52 missing variable

* fix: add more missing i18n

* fix: add menu missing key
2025-11-11 17:23:30 -06:00
Nicolas Mowen
f1a05d0f9b
Miscellaneous fixes (#20875)
* Improve stream fetching logic

* Reduce need to revalidate stream info

* fix frigate+ frame submission

* add UI setting to configure jsmpeg fallback timeout

* hide settings dropdown when fullscreen

* Fix arcface running on OpenVINO

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2025-11-11 17:00:54 -06:00
Josh Hawkins
a623150811
Add Camera Wizard tweaks (#20889)
* digest auth backend

* frontend

* i18n

* update field description language to include note about onvif specific credentials

* mask util helper function

* language

* mask passwords in http-flv and others where a url param is password
2025-11-11 06:46:23 -07:00
Josh Hawkins
e4eac4ac81
Add Camera Wizard improvements (#20876)
* backend api endpoint

* don't add no-credentials version of streams to rtsp_candidates

* frontend types

* improve types

* add optional probe dialog to wizard step 1

* i18n

* form description and field change

* add onvif form description

* match onvif probe pane with other steps in the wizard

* refactor to add probe and snapshot as step 2

* consolidate probe dialog

* don't change dialog size

* radio button style

* refactor to select onvif urls via combobox in step 3

* i18n

* add scrollbar container

* i18n cleanup

* fix button activity indicator

* match test parsing in step 3 with step 2

* hide resolution if both width and height are zero

* use drawer for stream selection on mobile in step 3

* suppress double toasts

* api endpoint description
2025-11-10 15:49:52 -06:00
Josh Hawkins
c371fc0c87
Miscellaneous Fixes (#20866)
* Don't warn when event ids have expired for trigger sync

* Import faster_whisper conditionally to avoid illegal instruction

* Catch OpenVINO runtime error

* fix race condition in detail stream context

navigating between tracked objects in Explore would sometimes prevent the object track from appearing

* Handle case where classification images are deleted

* Adjust default rounded corners on larger screens

* Improve flow handling for classification state

* Remove images when wizard is cancelled

* Improve deletion handling for classes

* Set constraints on review buffers

* Update to support correct data format

* Set minimum duration for recording based review items

* Use friendly name in review genai prompt

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-11-10 10:03:56 -07:00
Nicolas Mowen
99a363c047
Improve classification (#20863) 2025-11-09 16:21:13 -06:00
Nicolas Mowen
a374a60756
Miscellaneous Fixes (#20850)
* Fix wrongly added detection objects to alert

* Fix CudaGraph inverse condition

* Add debug logs

* Formatting
2025-11-09 08:38:38 -06:00
Nicolas Mowen
d41ee4ff88
Miscellaneous Fixes (#20848)
* Fix filtering for classification

* Adjust prompt to account for response tokens

* Correctly return response for reprocess

* Use API response to update data instead of trying to re-parse all of the values

* Implement rename class api

* Fix model deletion / rename dialog

* Remove camera spatial context

* Catch error
2025-11-08 13:13:40 -07:00
Josh Hawkins
c99ada8f6a
Tracked Object Details pane tweaks (#20849)
* use grid view on desktop

* refactor description box to remove buttons and add row of action icon buttons

* add tooltips

* fix trigger creation

when using the search effect to create a trigger, the prefilled object will not exist in the config yet

* i18n

* set max width on thumbnail
2025-11-08 12:26:30 -07:00
Josh Hawkins
01452e4c51
Miscellaneous Fixes (#20841)
* show id field when editing zone

* improve zone capitalization

* Update NPU models and docs

* fix mobilepage in tracked object details

* Use thread lock for openvino to avoid concurrent requests with JinaV2

* fix hashing function to avoid collisions

* remove extra flex div causing overflow

* ensure header stays on top of video controls

* don't smart capitalize friendly names

* Fix incorrect object classification crop

* don't display submit to plus if object doesn't have a snapshot

* check for snapshot and clip in actions menu

* frigate plus submission fix

still show frigate+ section if snapshot has already been submitted, and run optimistic update; local state was being overridden

* Don't fail to show 0% when showing classification

* Don't fail on file system error

* Improve title and description for review genai

* fix overflowing truncated review item description in detail stream

* catch events with review items that start after the first timeline entry

review items may start later than events within them, so subtract a padding from the start time in the filter so the start of events are not incorrectly filtered out of the list in the detail stream

* also pad on review end_time

* fix

* change order of timeline zoom buttons on mobile

* use grid to ensure genai title does not cause overflow

* small tweaks

* Cleanup

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-11-08 05:44:30 -07:00
GuoQing Liu
ef19332fe5
Add zones friendly name (#20761)
* feat: add zones friendly name

* fix: fix the issue where the input field was empty when there was no friendly_name

* chore: fix the issue where the friendly name would replace spaces with underscores

* docs: update zones docs

* Update web/src/components/settings/ZoneEditPane.tsx

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Add friendly_name option for zone configuration

Added optional friendly name for zones in configuration.

* fix: fix the logical error in the null/empty check for the polygons parameter

* fix: remove the toast name for zones will use the friendly_name instead

* docs: remove emoji tips

* revert: revert zones doc ui tips

* Update docs/docs/configuration/zones.md

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Update docs/docs/configuration/zones.md

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Update docs/docs/configuration/zones.md

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* feat: add friendly zone names to tracking details and lifecycle item descriptions

* chore: lint fix

* refactor: add friendly zone names to timeline entries and clean up unused code

* refactor: add formatList

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2025-11-07 08:02:06 -06:00
Josh Hawkins
530b69b877
Miscellaneous fixes (#20833)
* remove frigate+ icon from explore grid footer

* add margin

* pointer cursor on event menu items in detail stream

* don't show submit to plus for non-objects and if plus is disabled

* tweak spacing in annotation settings popover

* Fix deletion of classification images and library

* Ensure after creating a class that things are correct

* Fix dialog getting stuck

* Only show the genai summary popup on mobile when timeline is open

* fix audio transcription embedding

* spacing

* hide x icon on restart sheet to prevent closure issues

* prevent x overflow in detail stream on mobile safari

* ensure name is valid for search effect trigger

* add trigger to detail actions menu

* move find similar to actions menu

* Use a column layout for MobilePageContent in PlatformAwareSheet

This is so the header is outside the scrolling area and the content can grow/scroll independently. This now matches the way it's done in classification

* Skip azure execution provider

* add optional ref to always scroll to top

the more filters in explore was not scrolled to the top on open due to the use of framer motion

* fix title classes on desktop

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-11-07 06:53:27 -07:00
dependabot[bot]
795e80f76e
Bump @docusaurus/plugin-content-docs from 3.8.1 to 3.9.2 in /docs
Bumps [@docusaurus/plugin-content-docs](https://github.com/facebook/docusaurus/tree/HEAD/packages/docusaurus-plugin-content-docs) from 3.8.1 to 3.9.2.
- [Release notes](https://github.com/facebook/docusaurus/releases)
- [Changelog](https://github.com/facebook/docusaurus/blob/main/CHANGELOG.md)
- [Commits](https://github.com/facebook/docusaurus/commits/v3.9.2/packages/docusaurus-plugin-content-docs)

---
updated-dependencies:
- dependency-name: "@docusaurus/plugin-content-docs"
  dependency-version: 3.9.2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-20 11:31:47 +00:00
135 changed files with 7540 additions and 3319 deletions

1
.gitignore vendored

@@ -15,6 +15,7 @@ frigate/version.py
web/build
web/node_modules
web/coverage
web/.env
core
!/web/**/*.ts
.idea/*


@@ -14,6 +14,7 @@ push-boards: $(BOARDS:%=push-%)
version:
	echo 'VERSION = "$(VERSION)-$(COMMIT_HASH)"' > frigate/version.py
	echo 'VITE_GIT_COMMIT_HASH=$(COMMIT_HASH)' > web/.env

local: version
	docker buildx build --target=frigate --file docker/main/Dockerfile . \


@@ -12,7 +12,7 @@
A complete and local NVR designed for [Home Assistant](https://www.home-assistant.io) with AI object detection. Uses OpenCV and Tensorflow to perform realtime object detection locally for IP cameras.
Use of a GPU or AI accelerator such as a [Google Coral](https://coral.ai/products/) or [Hailo](https://hailo.ai/) is highly recommended. AI accelerators will outperform even the best CPUs with very little overhead.
Use of a GPU or AI accelerator is highly recommended. AI accelerators will outperform even the best CPUs with very little overhead. See Frigate's supported [object detectors](https://docs.frigate.video/configuration/object_detectors/).
- Tight integration with Home Assistant via a [custom component](https://github.com/blakeblackshear/frigate-hass-integration)
- Designed to minimize resource use and maximize performance by only looking for objects when and where it is necessary


@@ -68,36 +68,6 @@ The mere presence of an unidentified person in private areas during late night h
</details>
### Camera Spatial Context
In addition to defining activity patterns, you can provide spatial context for specific cameras to help the LLM generate more accurate and descriptive titles and scene descriptions. The `camera_context` field allows you to describe physical features and locations that are outside the camera's field of view but are relevant for understanding the scene.
**Important Guidelines:**
- This context is used **only for descriptive purposes** to help the LLM write better titles and scene descriptions
- It should describe **physical features and spatial relationships** (e.g., "front door is to the right", "driveway on the left")
- It should **NOT** include subjective assessments or threat evaluations (e.g., "high-crime area")
- Threat level determination remains based solely on observable actions defined in the activity patterns
Example configuration:
```yaml
cameras:
front_door:
review:
genai:
enabled: true
camera_context: |
- Front door entrance is to the right of the frame
- Driveway and street are to the left
- Steps in the center lead from the sidewalk to the front door
- Garage is located beyond the left edge of the frame
```
This helps the LLM generate more natural descriptions like "Person approaching front door" instead of "Person walking toward right side of frame".
The `camera_context` can be defined globally under `genai.review` and overridden per camera for specific spatial details.
### Image Source
By default, review summaries use preview images (cached preview frames) which have a lower resolution but use fewer tokens per image. For better image quality and more detailed analysis, you can configure Frigate to extract frames directly from recordings at a higher resolution:


@@ -5,7 +5,7 @@ title: Enrichments
# Enrichments
Some of Frigate's enrichments can use a discrete GPU / NPU for accelerated processing.
Some of Frigate's enrichments can use a discrete GPU or integrated GPU for accelerated processing.
## Requirements
@@ -18,8 +18,10 @@ Object detection and enrichments (like Semantic Search, Face Recognition, and Li
- **Intel**
  - OpenVINO will automatically be detected and used for enrichments in the default Frigate image.
  - **Note:** Intel NPUs have limited model support for enrichments. GPU is recommended for enrichments when available.
- **Nvidia**
  - Nvidia GPUs will automatically be detected and used for enrichments in the `-tensorrt` Frigate image.
  - Jetson devices will automatically be detected and used for enrichments in the `-tensorrt-jp6` Frigate image.


@@ -3,18 +3,18 @@ id: license_plate_recognition
title: License Plate Recognition (LPR)
---
Frigate can recognize license plates on vehicles and automatically add the detected characters to the `recognized_license_plate` field or a known name as a `sub_label` to tracked objects of type `car` or `motorcycle`. A common use case may be to read the license plates of cars pulling into a driveway or cars passing by on a street.
Frigate can recognize license plates on vehicles and automatically add the detected characters to the `recognized_license_plate` field or a [known](#matching) name as a `sub_label` to tracked objects of type `car` or `motorcycle`. A common use case may be to read the license plates of cars pulling into a driveway or cars passing by on a street.
LPR works best when the license plate is clearly visible to the camera. For moving vehicles, Frigate continuously refines the recognition process, keeping the most confident result. When a vehicle becomes stationary, LPR continues to run for a short time after to attempt recognition.
When a plate is recognized, the details are:
- Added as a `sub_label` (if known) or the `recognized_license_plate` field (if unknown) to a tracked object.
- Viewable in the Review Item Details pane in Review (sub labels).
- Added as a `sub_label` (if [known](#matching)) or the `recognized_license_plate` field (if unknown) to a tracked object.
- Viewable in the Details pane in Review/History.
- Viewable in the Tracked Object Details pane in Explore (sub labels and recognized license plates).
- Filterable through the More Filters menu in Explore.
- Published via the `frigate/events` MQTT topic as a `sub_label` (known) or `recognized_license_plate` (unknown) for the `car` or `motorcycle` tracked object.
- Published via the `frigate/tracked_object_update` MQTT topic with `name` (if known) and `plate`.
- Published via the `frigate/events` MQTT topic as a `sub_label` ([known](#matching)) or `recognized_license_plate` (unknown) for the `car` or `motorcycle` tracked object.
- Published via the `frigate/tracked_object_update` MQTT topic with `name` (if [known](#matching)) and `plate`.
## Model Requirements
@@ -31,6 +31,7 @@ In the default mode, Frigate's LPR needs to first detect a `car` or `motorcycle`
## Minimum System Requirements
License plate recognition works by running AI models locally on your system. The YOLOv9 plate detector model and the OCR models ([PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)) are relatively lightweight and can run on your CPU or GPU, depending on your configuration. At least 4GB of RAM is required.
## Configuration
License plate recognition is disabled by default. Enable it in your config file:
@@ -73,8 +74,8 @@ Fine-tune the LPR feature using these optional parameters at the global level of
  - Default: `small`
  - This can be `small` or `large`.
  - The `small` model is fast and identifies groups of Latin and Chinese characters.
  - The `large` model identifies Latin characters only, but uses an enhanced text detector and is more capable at finding characters on multi-line plates. It is significantly slower than the `small` model. Note that using the `large` model does not improve _text recognition_, but it may improve _text detection_.
  - For most users, the `small` model is recommended.
  - The `large` model identifies Latin characters only, and uses an enhanced text detector to find characters on multi-line plates. It is significantly slower than the `small` model.
  - If your country or region does not use multi-line plates, you should use the `small` model as performance is much better for single-line plates.
### Recognition
@@ -177,7 +178,7 @@ lpr:
:::note
If you want to detect cars on cameras but don't want to use resources to run LPR on those cars, you should disable LPR for those specific cameras.
If a camera is configured to detect `car` or `motorcycle` but you don't want Frigate to run LPR for that camera, disable LPR at the camera level:
```yaml
cameras:
@@ -305,7 +306,7 @@ With this setup:
- Review items will always be classified as a `detection`.
- Snapshots will always be saved.
- Zones and object masks are **not** used.
- The `frigate/events` MQTT topic will **not** publish tracked object updates with the license plate bounding box and score, though `frigate/reviews` will publish if recordings are enabled. If a plate is recognized as a known plate, publishing will occur with an updated `sub_label` field. If characters are recognized, publishing will occur with an updated `recognized_license_plate` field.
- The `frigate/events` MQTT topic will **not** publish tracked object updates with the license plate bounding box and score, though `frigate/reviews` will publish if recordings are enabled. If a plate is recognized as a [known](#matching) plate, publishing will occur with an updated `sub_label` field. If characters are recognized, publishing will occur with an updated `recognized_license_plate` field.
- License plate snapshots are saved at the highest-scoring moment and appear in Explore.
- Debug view will not show `license_plate` bounding boxes.


@@ -261,6 +261,8 @@ OpenVINO is supported on 6th Gen Intel platforms (Skylake) and newer. It will al
:::tip
**NPU + GPU Systems:** If you have both NPU and GPU available (Intel Core Ultra processors), use NPU for object detection and GPU for enrichments (semantic search, face recognition, etc.) for best performance and compatibility.
When using many cameras one detector may not be enough to keep up. Multiple detectors can be defined assuming GPU resources are available. An example configuration would be:
```yaml
@@ -283,7 +285,7 @@ detectors:
| [RF-DETR](#rf-detr) | ✅ | ✅ | Requires XE iGPU or Arc |
| [YOLO-NAS](#yolo-nas) | ✅ | ✅ | |
| [MobileNet v2](#ssdlite-mobilenet-v2) | ✅ | ✅ | Fast and lightweight model, less accurate than larger models |
| [YOLOX](#yolox) | ✅ | ? | |
| [YOLOX](#yolox) | ✅ | ? | |
| [D-FINE](#d-fine) | ❌ | ❌ | |
#### SSDLite MobileNet v2


@@ -810,6 +810,8 @@ cameras:
      # NOTE: This must be different than any camera names, but can match with another zone on another
      # camera.
      front_steps:
        # Optional: A friendly name or descriptive text for the zone
        friendly_name: ""
        # Required: List of x,y coordinates to define the polygon of the zone.
        # NOTE: Presence in a zone is evaluated only based on the bottom center of the objects bounding box.
        coordinates: 0.033,0.306,0.324,0.138,0.439,0.185,0.042,0.428


@@ -78,7 +78,7 @@ Switching between V1 and V2 requires reindexing your embeddings. The embeddings
### GPU Acceleration
The CLIP models are downloaded in ONNX format, and the `large` model can be accelerated using GPU / NPU hardware, when available. This depends on the Docker build that is used. You can also target a specific device in a multi-GPU installation.
The CLIP models are downloaded in ONNX format, and the `large` model can be accelerated using GPU hardware, when available. This depends on the Docker build that is used. You can also target a specific device in a multi-GPU installation.
```yaml
semantic_search:
@@ -90,7 +90,7 @@ semantic_search:
:::info
If the correct build is used for your GPU / NPU and the `large` model is configured, then the GPU / NPU will be detected and used automatically.
If the correct build is used for your GPU / NPU and the `large` model is configured, then the GPU will be detected and used automatically.
Specify the `device` option to target a specific GPU in a multi-GPU system (see [onnxruntime's provider options](https://onnxruntime.ai/docs/execution-providers/)).
If you do not specify a device, the first available GPU will be used.
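A minimal sketch of the `device` option described above (values here are illustrative; the index is assumed to follow onnxruntime's execution-provider device numbering linked above):

```yaml
semantic_search:
  enabled: True
  model_size: large
  # Hypothetical: target the second GPU in a multi-GPU system
  device: "1"
```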
@@ -141,7 +141,7 @@ Triggers are best configured through the Frigate UI.
Check the `Add Attribute` box to add the trigger's internal ID (e.g., "red_car_alert") to a data attribute on the tracked object that can be processed via the API or MQTT.
5. Save the trigger to update the configuration and store the embedding in the database.
When a trigger fires, the UI highlights the trigger with a blue dot for 3 seconds for easy identification.
When a trigger fires, the UI highlights the trigger with a blue dot for 3 seconds for easy identification. Additionally, the UI will show the last date/time and tracked object ID that activated your trigger. The last triggered timestamp is not saved to the database or persisted through restarts of Frigate.
### Usage and Best Practices


@@ -27,6 +27,7 @@ cameras:
          - entire_yard
    zones:
      entire_yard:
        friendly_name: Entire yard # You can use characters from any language
        coordinates: ...
```
@@ -44,8 +45,10 @@ cameras:
          - edge_yard
    zones:
      edge_yard:
        friendly_name: Edge yard # You can use characters from any language
        coordinates: ...
      inner_yard:
        friendly_name: Inner yard # You can use characters from any language
        coordinates: ...
```
@@ -59,6 +62,7 @@ cameras:
          - entire_yard
    zones:
      entire_yard:
        friendly_name: Entire yard
        coordinates: ...
```
@@ -82,6 +86,7 @@ cameras:
Only car objects can trigger the `front_yard_street` zone and only person can trigger the `entire_yard`. Objects will be tracked for any `person` that enters anywhere in the yard, and for cars only if they enter the street.
### Zone Loitering
Sometimes objects are expected to be passing through a zone, but an object loitering in an area is unexpected. Zones can be configured to have a minimum loitering time after which the object will be considered in the zone.


@@ -56,7 +56,7 @@ services:
    volumes:
      - /path/to/your/config:/config
      - /path/to/your/storage:/media/frigate
      - type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
      - type: tmpfs # Recommended: 1GB of memory
        target: /tmp/cache
        tmpfs:
          size: 1000000000
@@ -310,7 +310,7 @@ services:
      - /etc/localtime:/etc/localtime:ro
      - /path/to/your/config:/config
      - /path/to/your/storage:/media/frigate
      - type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
      - type: tmpfs # Recommended: 1GB of memory
        target: /tmp/cache
        tmpfs:
          size: 1000000000

998
docs/package-lock.json generated

File diff suppressed because it is too large


@@ -18,7 +18,7 @@
  },
  "dependencies": {
    "@docusaurus/core": "^3.7.0",
    "@docusaurus/plugin-content-docs": "^3.6.3",
    "@docusaurus/plugin-content-docs": "^3.9.2",
    "@docusaurus/preset-classic": "^3.7.0",
    "@docusaurus/theme-mermaid": "^3.6.3",
    "@inkeep/docusaurus": "^2.0.16",


@@ -179,6 +179,36 @@ def config(request: Request):
    return JSONResponse(content=config)


@router.get("/config/raw_paths", dependencies=[Depends(require_role(["admin"]))])
def config_raw_paths(request: Request):
    """Admin-only endpoint that returns camera paths and go2rtc streams without credential masking."""
    config_obj: FrigateConfig = request.app.frigate_config

    raw_paths = {"cameras": {}, "go2rtc": {"streams": {}}}

    # Extract raw camera ffmpeg input paths
    for camera_name, camera in config_obj.cameras.items():
        raw_paths["cameras"][camera_name] = {
            "ffmpeg": {
                "inputs": [
                    {"path": input.path, "roles": input.roles}
                    for input in camera.ffmpeg.inputs
                ]
            }
        }

    # Extract raw go2rtc stream URLs
    go2rtc_config = config_obj.go2rtc.model_dump(
        mode="json", warnings="none", exclude_none=True
    )
    for stream_name, stream in go2rtc_config.get("streams", {}).items():
        if stream is None:
            continue
        raw_paths["go2rtc"]["streams"][stream_name] = stream

    return JSONResponse(content=raw_paths)
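The endpoint above just walks the config to collect the unmasked values. A simplified, framework-free sketch of the same dictionary-building logic over plain dicts (`build_raw_paths` and the input shapes are illustrative, not Frigate's API):

```python
# Sketch: collect camera ffmpeg inputs and go2rtc streams into the same
# response shape the endpoint returns. Inputs are assumed plain dicts.
def build_raw_paths(cameras: dict, go2rtc_streams: dict) -> dict:
    raw_paths = {"cameras": {}, "go2rtc": {"streams": {}}}
    for camera_name, camera in cameras.items():
        raw_paths["cameras"][camera_name] = {
            "ffmpeg": {
                "inputs": [
                    {"path": i["path"], "roles": i["roles"]}
                    for i in camera["ffmpeg"]["inputs"]
                ]
            }
        }
    for stream_name, stream in go2rtc_streams.items():
        if stream is None:  # skip empty stream entries, as the endpoint does
            continue
        raw_paths["go2rtc"]["streams"][stream_name] = stream
    return raw_paths

cams = {
    "front": {
        "ffmpeg": {
            "inputs": [{"path": "rtsp://user:pass@cam/stream", "roles": ["record"]}]
        }
    }
}
streams = {"front": ["rtsp://user:pass@cam/stream"], "empty": None}
result = build_raw_paths(cams, streams)
```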
@router.get("/config/raw")
def config_raw():
config_file = find_config_file()


@@ -3,11 +3,17 @@
import json
import logging
import re
from importlib.util import find_spec
from pathlib import Path
from urllib.parse import quote_plus
import httpx
import requests
from fastapi import APIRouter, Depends, Request, Response
from fastapi import APIRouter, Depends, Query, Request, Response
from fastapi.responses import JSONResponse
from onvif import ONVIFCamera, ONVIFError
from zeep.exceptions import Fault, TransportError
from zeep.transports import AsyncTransport
from frigate.api.auth import require_role
from frigate.api.defs.tags import Tags
@@ -452,3 +458,537 @@ def _extract_fps(r_frame_rate: str) -> float | None:
        return round(float(num) / float(den), 2)
    except (ValueError, ZeroDivisionError):
        return None
@router.get(
    "/onvif/probe",
    dependencies=[Depends(require_role(["admin"]))],
    summary="Probe ONVIF device",
    description=(
        "Probe an ONVIF device to determine capabilities and optionally test available stream URIs. "
        "Query params: host (required), port (default 80), username, password, test (boolean), "
        "auth_type (basic or digest, default basic)."
    ),
)
async def onvif_probe(
    request: Request,
    host: str = Query(None),
    port: int = Query(80),
    username: str = Query(""),
    password: str = Query(""),
    test: bool = Query(False),
    auth_type: str = Query("basic"),  # Add auth_type parameter
):
    """
    Probe a single ONVIF device to determine capabilities.

    Connects to an ONVIF device and queries for:
    - Device information (manufacturer, model)
    - Media profiles count
    - PTZ support
    - Available presets
    - Autotracking support

    Query Parameters:
        host: Device host/IP address (required)
        port: Device port (default 80)
        username: ONVIF username (optional)
        password: ONVIF password (optional)
        test: run ffprobe on the stream (optional)
        auth_type: Authentication type - "basic" or "digest" (default "basic")

    Returns:
        JSON with device capabilities information
    """
    if not host:
        return JSONResponse(
            content={"success": False, "message": "host parameter is required"},
            status_code=400,
        )

    # Validate host format
    if not _is_valid_host(host):
        return JSONResponse(
            content={"success": False, "message": "Invalid host format"},
            status_code=400,
        )

    # Validate auth_type
    if auth_type not in ["basic", "digest"]:
        return JSONResponse(
            content={
                "success": False,
                "message": "auth_type must be 'basic' or 'digest'",
            },
            status_code=400,
        )

    onvif_camera = None

    try:
        logger.debug(f"Probing ONVIF device at {host}:{port} with {auth_type} auth")

        try:
            wsdl_base = None
            spec = find_spec("onvif")
            if spec and getattr(spec, "origin", None):
                wsdl_base = str(Path(spec.origin).parent / "wsdl")
        except Exception:
            wsdl_base = None

        onvif_camera = ONVIFCamera(
            host, port, username or "", password or "", wsdl_dir=wsdl_base
        )

        # Configure digest authentication if requested
        if auth_type == "digest" and username and password:
            # Create httpx client with digest auth
            auth = httpx.DigestAuth(username, password)
            client = httpx.AsyncClient(auth=auth, timeout=10.0)

            # Replace the transport in the zeep client
            transport = AsyncTransport(client=client)

            # Update the xaddr before setting transport
            await onvif_camera.update_xaddrs()

            # Replace transport in all services
            if hasattr(onvif_camera, "devicemgmt"):
                onvif_camera.devicemgmt.zeep_client.transport = transport
            if hasattr(onvif_camera, "media"):
                onvif_camera.media.zeep_client.transport = transport
            if hasattr(onvif_camera, "ptz"):
                onvif_camera.ptz.zeep_client.transport = transport

            logger.debug("Configured digest authentication")
        else:
            await onvif_camera.update_xaddrs()

        # Get device information
        device_info = {
            "manufacturer": "Unknown",
            "model": "Unknown",
            "firmware_version": "Unknown",
        }
        try:
            device_service = await onvif_camera.create_devicemgmt_service()

            # Update transport for device service if digest auth
            if auth_type == "digest" and username and password:
                auth = httpx.DigestAuth(username, password)
                client = httpx.AsyncClient(auth=auth, timeout=10.0)
                transport = AsyncTransport(client=client)
                device_service.zeep_client.transport = transport

            device_info_resp = await device_service.GetDeviceInformation()
            manufacturer = getattr(device_info_resp, "Manufacturer", None) or (
                device_info_resp.get("Manufacturer")
                if isinstance(device_info_resp, dict)
                else None
            )
            model = getattr(device_info_resp, "Model", None) or (
                device_info_resp.get("Model")
                if isinstance(device_info_resp, dict)
                else None
            )
            firmware = getattr(device_info_resp, "FirmwareVersion", None) or (
                device_info_resp.get("FirmwareVersion")
                if isinstance(device_info_resp, dict)
                else None
            )
            device_info.update(
                {
                    "manufacturer": manufacturer or "Unknown",
                    "model": model or "Unknown",
                    "firmware_version": firmware or "Unknown",
                }
            )
        except Exception as e:
            logger.debug(f"Failed to get device info: {e}")

        # Get media profiles
        profiles = []
        profiles_count = 0
        first_profile_token = None
        ptz_config_token = None
        try:
            media_service = await onvif_camera.create_media_service()

            # Update transport for media service if digest auth
            if auth_type == "digest" and username and password:
                auth = httpx.DigestAuth(username, password)
                client = httpx.AsyncClient(auth=auth, timeout=10.0)
                transport = AsyncTransport(client=client)
                media_service.zeep_client.transport = transport

            profiles = await media_service.GetProfiles()
            profiles_count = len(profiles) if profiles else 0
            if profiles and len(profiles) > 0:
                p = profiles[0]
                first_profile_token = getattr(p, "token", None) or (
                    p.get("token") if isinstance(p, dict) else None
                )

                # Get PTZ configuration token from the profile
                ptz_configuration = getattr(p, "PTZConfiguration", None) or (
                    p.get("PTZConfiguration") if isinstance(p, dict) else None
                )
                if ptz_configuration:
                    ptz_config_token = getattr(ptz_configuration, "token", None) or (
                        ptz_configuration.get("token")
                        if isinstance(ptz_configuration, dict)
                        else None
                    )
        except Exception as e:
            logger.debug(f"Failed to get media profiles: {e}")

        # Check PTZ support and capabilities
        ptz_supported = False
        presets_count = 0
        autotrack_supported = False
        try:
            ptz_service = await onvif_camera.create_ptz_service()

            # Update transport for PTZ service if digest auth
            if auth_type == "digest" and username and password:
                auth = httpx.DigestAuth(username, password)
client = httpx.AsyncClient(auth=auth, timeout=10.0)
transport = AsyncTransport(client=client)
ptz_service.zeep_client.transport = transport
# Check if PTZ service is available
try:
await ptz_service.GetServiceCapabilities()
ptz_supported = True
logger.debug("PTZ service is available")
except Exception as e:
logger.debug(f"PTZ service not available: {e}")
ptz_supported = False
# Try to get presets if PTZ is supported and we have a profile
if ptz_supported and first_profile_token:
try:
presets_resp = await ptz_service.GetPresets(
{"ProfileToken": first_profile_token}
)
presets_count = len(presets_resp) if presets_resp else 0
logger.debug(f"Found {presets_count} presets")
except Exception as e:
logger.debug(f"Failed to get presets: {e}")
presets_count = 0
# Check for autotracking support - requires both FOV relative movement and MoveStatus
if ptz_supported and first_profile_token and ptz_config_token:
# First check for FOV relative movement support
pt_r_fov_supported = False
try:
config_request = ptz_service.create_type("GetConfigurationOptions")
config_request.ConfigurationToken = ptz_config_token
ptz_config = await ptz_service.GetConfigurationOptions(
config_request
)
if ptz_config:
# Check for pt-r-fov support
spaces = getattr(ptz_config, "Spaces", None) or (
ptz_config.get("Spaces")
if isinstance(ptz_config, dict)
else None
)
if spaces:
rel_pan_tilt_space = getattr(
spaces, "RelativePanTiltTranslationSpace", None
) or (
spaces.get("RelativePanTiltTranslationSpace")
if isinstance(spaces, dict)
else None
)
if rel_pan_tilt_space:
# Look for FOV space
for i, space in enumerate(rel_pan_tilt_space):
uri = None
if isinstance(space, dict):
uri = space.get("URI")
else:
uri = getattr(space, "URI", None)
if uri and "TranslationSpaceFov" in uri:
pt_r_fov_supported = True
logger.debug(
"FOV relative movement (pt-r-fov) supported"
)
break
logger.debug(f"PTZ config spaces: {ptz_config}")
except Exception as e:
logger.debug(f"Failed to check FOV relative movement: {e}")
pt_r_fov_supported = False
# Now check for MoveStatus support via GetServiceCapabilities
if pt_r_fov_supported:
try:
service_capabilities_request = ptz_service.create_type(
"GetServiceCapabilities"
)
service_capabilities = await ptz_service.GetServiceCapabilities(
service_capabilities_request
)
# Look for MoveStatus in the capabilities
move_status_capable = False
if service_capabilities:
# Try to find MoveStatus key recursively
def find_move_status(obj, key="MoveStatus"):
if isinstance(obj, dict):
if key in obj:
return obj[key]
for v in obj.values():
result = find_move_status(v, key)
if result is not None:
return result
elif hasattr(obj, key):
return getattr(obj, key)
elif hasattr(obj, "__dict__"):
for v in vars(obj).values():
result = find_move_status(v, key)
if result is not None:
return result
return None
move_status_value = find_move_status(service_capabilities)
# MoveStatus should return "true" if supported
if isinstance(move_status_value, bool):
move_status_capable = move_status_value
elif isinstance(move_status_value, str):
move_status_capable = (
move_status_value.lower() == "true"
)
logger.debug(f"MoveStatus capability: {move_status_value}")
# Autotracking is supported if both conditions are met
autotrack_supported = pt_r_fov_supported and move_status_capable
if autotrack_supported:
logger.debug(
"Autotracking fully supported (pt-r-fov + MoveStatus)"
)
else:
logger.debug(
f"Autotracking not fully supported - pt-r-fov: {pt_r_fov_supported}, MoveStatus: {move_status_capable}"
)
except Exception as e:
logger.debug(f"Failed to check MoveStatus support: {e}")
autotrack_supported = False
except Exception as e:
logger.debug(f"Failed to probe PTZ service: {e}")
result = {
"success": True,
"host": host,
"port": port,
"manufacturer": device_info["manufacturer"],
"model": device_info["model"],
"firmware_version": device_info["firmware_version"],
"profiles_count": profiles_count,
"ptz_supported": ptz_supported,
"presets_count": presets_count,
"autotrack_supported": autotrack_supported,
}
# Gather RTSP candidates
rtsp_candidates: list[dict] = []
try:
media_service = await onvif_camera.create_media_service()
# Update transport for media service if digest auth
if auth_type == "digest" and username and password:
auth = httpx.DigestAuth(username, password)
client = httpx.AsyncClient(auth=auth, timeout=10.0)
transport = AsyncTransport(client=client)
media_service.zeep_client.transport = transport
if profiles_count and media_service:
for p in profiles or []:
token = getattr(p, "token", None) or (
p.get("token") if isinstance(p, dict) else None
)
if not token:
continue
try:
stream_setup = {
"Stream": "RTP-Unicast",
"Transport": {"Protocol": "RTSP"},
}
stream_req = {
"ProfileToken": token,
"StreamSetup": stream_setup,
}
stream_uri_resp = await media_service.GetStreamUri(stream_req)
uri = (
stream_uri_resp.get("Uri")
if isinstance(stream_uri_resp, dict)
else getattr(stream_uri_resp, "Uri", None)
)
if uri:
logger.debug(
f"GetStreamUri returned for token {token}: {uri}"
)
# If credentials were provided, do NOT add the unauthenticated URI.
try:
if isinstance(uri, str) and uri.startswith("rtsp://"):
if username and password and "@" not in uri:
# Inject URL-encoded credentials and add only the
# authenticated version.
cred = f"{quote_plus(username)}:{quote_plus(password)}@"
injected = uri.replace(
"rtsp://", f"rtsp://{cred}", 1
)
rtsp_candidates.append(
{
"source": "GetStreamUri",
"profile_token": token,
"uri": injected,
}
)
else:
# No credentials provided or URI already contains
# credentials — add the URI as returned.
rtsp_candidates.append(
{
"source": "GetStreamUri",
"profile_token": token,
"uri": uri,
}
)
else:
# Non-RTSP URIs (e.g., http-flv) — add as returned.
rtsp_candidates.append(
{
"source": "GetStreamUri",
"profile_token": token,
"uri": uri,
}
)
except Exception as e:
logger.debug(
f"Skipping stream URI for token {token} due to processing error: {e}"
)
continue
except Exception:
logger.debug(
f"GetStreamUri failed for token {token}", exc_info=True
)
continue
# Add common RTSP patterns as fallback
if not rtsp_candidates:
common_paths = [
"/h264",
"/live.sdp",
"/media.amp",
"/Streaming/Channels/101",
"/Streaming/Channels/1",
"/stream1",
"/cam/realmonitor?channel=1&subtype=0",
"/11",
]
# Use URL-encoded credentials for pattern fallback URIs when provided
auth_str = (
f"{quote_plus(username)}:{quote_plus(password)}@"
if username and password
else ""
)
rtsp_port = 554
for path in common_paths:
uri = f"rtsp://{auth_str}{host}:{rtsp_port}{path}"
rtsp_candidates.append({"source": "pattern", "uri": uri})
except Exception:
logger.debug("Failed to collect RTSP candidates", exc_info=True)
# Optionally test RTSP candidates using ffprobe_stream
tested_candidates = []
if test and rtsp_candidates:
for c in rtsp_candidates:
uri = c["uri"]
to_test = [uri]
try:
if (
username
and password
and isinstance(uri, str)
and uri.startswith("rtsp://")
and "@" not in uri
):
cred = f"{quote_plus(username)}:{quote_plus(password)}@"
cred_uri = uri.replace("rtsp://", f"rtsp://{cred}", 1)
if cred_uri not in to_test:
to_test.append(cred_uri)
except Exception:
pass
for test_uri in to_test:
try:
probe = ffprobe_stream(
request.app.frigate_config.ffmpeg, test_uri, detailed=False
)
logger.debug(f"ffprobe result for {test_uri}: {probe}")
ok = probe is not None and getattr(probe, "returncode", 1) == 0
tested_candidates.append(
{
"uri": test_uri,
"source": c.get("source"),
"ok": ok,
"profile_token": c.get("profile_token"),
}
)
except Exception as e:
logger.debug(f"Unable to probe stream: {e}")
tested_candidates.append(
{
"uri": test_uri,
"source": c.get("source"),
"ok": False,
"profile_token": c.get("profile_token"),
}
)
result["rtsp_candidates"] = rtsp_candidates
if test:
result["rtsp_tested"] = tested_candidates
logger.debug(f"ONVIF probe successful: {result}")
return JSONResponse(content=result)
except ONVIFError as e:
logger.warning(f"ONVIF error probing {host}:{port}: {e}")
return JSONResponse(
content={"success": False, "message": "ONVIF error"},
status_code=400,
)
except (Fault, TransportError) as e:
logger.warning(f"Connection error probing {host}:{port}: {e}")
return JSONResponse(
content={"success": False, "message": "Connection error"},
status_code=503,
)
except Exception as e:
logger.warning(f"Error probing ONVIF device at {host}:{port}, {e}")
return JSONResponse(
content={"success": False, "message": "Probe failed"},
status_code=500,
)
finally:
# Best-effort cleanup of ONVIF camera client session
if onvif_camera is not None:
try:
# Check if the camera has a close method and call it
if hasattr(onvif_camera, "close"):
await onvif_camera.close()
except Exception as e:
logger.debug(f"Error closing ONVIF camera session: {e}")
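The credential-injection branch above only rewrites `rtsp://` URIs that do not already embed a user, URL-encoding the username and password with `quote_plus` so reserved characters survive inside the URI. A minimal standalone sketch of that rule (the function name is illustrative, not part of Frigate):

```python
from urllib.parse import quote_plus

def inject_rtsp_credentials(uri: str, username: str, password: str) -> str:
    """Return uri with URL-encoded credentials injected, leaving non-RTSP
    or already-authenticated URIs untouched."""
    if not uri.startswith("rtsp://") or "@" in uri:
        return uri
    cred = f"{quote_plus(username)}:{quote_plus(password)}@"
    # replace only the first occurrence of the scheme prefix
    return uri.replace("rtsp://", f"rtsp://{cred}", 1)
```

Encoding matters because a password like `p@ss` would otherwise make the host part of the URI ambiguous; `quote_plus` turns it into `p%40ss`.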


@ -37,6 +37,8 @@ from frigate.models import Event
from frigate.util.classification import (
collect_object_classification_examples,
collect_state_classification_examples,
get_dataset_image_count,
read_training_metadata,
)
from frigate.util.file import get_event_snapshot
@ -112,9 +114,18 @@ def reclassify_face(request: Request, body: dict = None):
context: EmbeddingsContext = request.app.embeddings
response = context.reprocess_face(training_file)
if not isinstance(response, dict):
return JSONResponse(
status_code=500,
content={
"success": False,
"message": "Could not process request.",
},
)
return JSONResponse(
status_code=200 if response.get("success", True) else 400,
content=response,
)
@ -555,23 +566,59 @@ def get_classification_dataset(name: str):
dataset_dir = os.path.join(CLIPS_DIR, sanitize_filename(name), "dataset")
if not os.path.exists(dataset_dir):
return JSONResponse(
status_code=200, content={"categories": {}, "training_metadata": None}
)
for category_name in os.listdir(dataset_dir):
category_dir = os.path.join(dataset_dir, category_name)
if not os.path.isdir(category_dir):
continue
dataset_dict[category_name] = []
for file in filter(
lambda f: (f.lower().endswith((".webp", ".png", ".jpg", ".jpeg"))),
os.listdir(category_dir),
):
dataset_dict[category_name].append(file)
# Get training metadata
metadata = read_training_metadata(sanitize_filename(name))
current_image_count = get_dataset_image_count(sanitize_filename(name))
if metadata is None:
training_metadata = {
"has_trained": False,
"last_training_date": None,
"last_training_image_count": 0,
"current_image_count": current_image_count,
"new_images_count": current_image_count,
"dataset_changed": current_image_count > 0,
}
else:
last_training_count = metadata.get("last_training_image_count", 0)
# Dataset has changed if count is different (either added or deleted images)
dataset_changed = current_image_count != last_training_count
# Only show positive count for new images (ignore deletions in the count display)
new_images_count = max(0, current_image_count - last_training_count)
training_metadata = {
"has_trained": True,
"last_training_date": metadata.get("last_training_date"),
"last_training_image_count": last_training_count,
"current_image_count": current_image_count,
"new_images_count": new_images_count,
"dataset_changed": dataset_changed,
}
return JSONResponse(
status_code=200,
content={
"categories": dataset_dict,
"training_metadata": training_metadata,
},
)
@router.get(
@ -662,12 +709,106 @@ def delete_classification_dataset_images(
if os.path.isfile(file_path):
os.unlink(file_path)
if os.path.exists(folder) and not os.listdir(folder):
os.rmdir(folder)
return JSONResponse(
content=({"success": True, "message": "Successfully deleted images."}),
status_code=200,
)
@router.put(
"/classification/{name}/dataset/{old_category}/rename",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Rename a classification category",
description="""Renames a classification category for a given classification model.
The old category must exist and the new name must be valid. Returns a success message or an error if the name is invalid.""",
)
def rename_classification_category(
request: Request, name: str, old_category: str, body: dict = None
):
config: FrigateConfig = request.app.frigate_config
if name not in config.classification.custom:
return JSONResponse(
content=(
{
"success": False,
"message": f"{name} is not a known classification model.",
}
),
status_code=404,
)
json: dict[str, Any] = body or {}
new_category = sanitize_filename(json.get("new_category", ""))
if not new_category:
return JSONResponse(
content=(
{
"success": False,
"message": "New category name is required.",
}
),
status_code=400,
)
old_folder = os.path.join(
CLIPS_DIR, sanitize_filename(name), "dataset", sanitize_filename(old_category)
)
new_folder = os.path.join(
CLIPS_DIR, sanitize_filename(name), "dataset", new_category
)
if not os.path.exists(old_folder):
return JSONResponse(
content=(
{
"success": False,
"message": f"Category {old_category} does not exist.",
}
),
status_code=404,
)
if os.path.exists(new_folder):
return JSONResponse(
content=(
{
"success": False,
"message": f"Category {new_category} already exists.",
}
),
status_code=400,
)
try:
os.rename(old_folder, new_folder)
return JSONResponse(
content=(
{
"success": True,
"message": f"Successfully renamed category to {new_category}.",
}
),
status_code=200,
)
except Exception as e:
logger.error(f"Error renaming category: {e}")
return JSONResponse(
content=(
{
"success": False,
"message": "Failed to rename category",
}
),
status_code=500,
)
@router.post(
"/classification/{name}/dataset/categorize",
response_model=GenericResponse,
@ -723,7 +864,7 @@ def categorize_classification_image(request: Request, name: str, body: dict = No
os.unlink(training_file)
return JSONResponse(
content=({"success": True, "message": "Successfully categorized image."}),
status_code=200,
)
@ -761,7 +902,7 @@ def delete_classification_train_images(request: Request, name: str, body: dict =
os.unlink(file_path)
return JSONResponse(
content=({"success": True, "message": "Successfully deleted images."}),
status_code=200,
)
@ -812,31 +953,29 @@ async def generate_object_examples(request: Request, body: GenerateObjectExample
dependencies=[Depends(require_role(["admin"]))],
summary="Delete a classification model",
description="""Deletes a specific classification model and all its associated data.
Works even if the model is not in the config (e.g., partially created during wizard).
Returns a success message.""",
)
def delete_classification_model(request: Request, name: str):
config: FrigateConfig = request.app.frigate_config
sanitized_name = sanitize_filename(name)
# Delete the classification model's data directory in clips
data_dir = os.path.join(CLIPS_DIR, sanitized_name)
if os.path.exists(data_dir):
try:
shutil.rmtree(data_dir)
logger.info(f"Deleted classification data directory for {name}")
except Exception as e:
logger.debug(f"Failed to delete data directory for {name}: {e}")
# Delete the classification model's files in model_cache
model_dir = os.path.join(MODEL_CACHE_DIR, sanitized_name)
if os.path.exists(model_dir):
try:
shutil.rmtree(model_dir)
logger.info(f"Deleted classification model directory for {name}")
except Exception as e:
logger.debug(f"Failed to delete model directory for {name}: {e}")
return JSONResponse(
content=(


@ -1781,9 +1781,8 @@ def create_trigger_embedding(
logger.debug(
f"Writing thumbnail for trigger with data {body.data} in {camera_name}."
)
except Exception:
logger.exception(
f"Failed to write thumbnail for trigger with data {body.data} in {camera_name}"
)
@ -1807,8 +1806,8 @@ def create_trigger_embedding(
status_code=200,
)
except Exception:
logger.exception("Error creating trigger embedding")
return JSONResponse(
content={
"success": False,
@ -1917,9 +1916,8 @@ def update_trigger_embedding(
logger.debug(
f"Deleted thumbnail for trigger with data {trigger.data} in {camera_name}."
)
except Exception:
logger.exception(
f"Failed to delete thumbnail for trigger with data {trigger.data} in {camera_name}"
)
@ -1958,9 +1956,8 @@ def update_trigger_embedding(
logger.debug(
f"Writing thumbnail for trigger with data {body.data} in {camera_name}."
)
except Exception:
logger.exception(
f"Failed to write thumbnail for trigger with data {body.data} in {camera_name}"
)
@ -1972,8 +1969,8 @@ def update_trigger_embedding(
status_code=200,
)
except Exception:
logger.exception("Error updating trigger embedding")
return JSONResponse(
content={
"success": False,
@ -2033,9 +2030,8 @@ def delete_trigger_embedding(
logger.debug(
f"Deleted thumbnail for trigger with data {trigger.data} in {camera_name}."
)
except Exception:
logger.exception(
f"Failed to delete thumbnail for trigger with data {trigger.data} in {camera_name}"
)
@ -2047,8 +2043,8 @@ def delete_trigger_embedding(
status_code=200,
)
except Exception:
logger.exception("Error deleting trigger embedding")
return JSONResponse(
content={
"success": False,


@ -762,6 +762,15 @@ async def recording_clip(
.order_by(Recordings.start_time.asc())
)
if recordings.count() == 0:
return JSONResponse(
content={
"success": False,
"message": "No recordings found for the specified time range",
},
status_code=400,
)
file_name = sanitize_filename(f"playlist_{camera_name}_{start_ts}-{end_ts}.txt")
file_path = os.path.join(CACHE_DIR, file_name)
with open(file_path, "w") as file:


@ -136,6 +136,7 @@ class CameraMaintainer(threading.Thread):
self.ptz_metrics[name],
self.region_grids[name],
self.stop_event,
self.config.logger,
)
self.camera_processes[config.name] = camera_process
camera_process.start()
@ -156,7 +157,11 @@ class CameraMaintainer(threading.Thread):
self.frame_manager.create(f"{config.name}_frame{i}", frame_size)
capture_process = CameraCapture(
config,
count,
self.camera_metrics[name],
self.stop_event,
self.config.logger,
)
capture_process.daemon = True
self.capture_processes[name] = capture_process


@ -177,6 +177,12 @@ class CameraConfig(FrigateBaseModel):
def ffmpeg_cmds(self) -> list[dict[str, list[str]]]:
return self._ffmpeg_cmds
def get_formatted_name(self) -> str:
"""Return the friendly name if set, otherwise return a formatted version of the camera name."""
if self.friendly_name:
return self.friendly_name
return self.name.replace("_", " ").title() if self.name else ""
def create_ffmpeg_cmds(self):
if "_ffmpeg_cmds" in self:
return


@ -140,10 +140,6 @@ Evaluate in this order:
The mere presence of an unidentified person in private areas during late night hours is inherently suspicious and warrants human review, regardless of what activity they appear to be doing or how brief the sequence is.""",
title="Custom activity context prompt defining normal and suspicious activity patterns for this property.",
)
class ReviewConfig(FrigateBaseModel):


@ -13,6 +13,9 @@ logger = logging.getLogger(__name__)
class ZoneConfig(BaseModel):
friendly_name: Optional[str] = Field(
None, title="Zone friendly name used in the Frigate UI."
)
filters: dict[str, FilterConfig] = Field(
default_factory=dict, title="Zone filters."
)
@ -53,6 +56,12 @@ class ZoneConfig(BaseModel):
def contour(self) -> np.ndarray:
return self._contour
def get_formatted_name(self, zone_name: str) -> str:
"""Return the friendly name if set, otherwise return a formatted version of the zone name."""
if self.friendly_name:
return self.friendly_name
return zone_name.replace("_", " ").title()
@field_validator("objects", mode="before")
@classmethod
def validate_objects(cls, v):


@ -4,7 +4,6 @@ import logging
import os
import sherpa_onnx
from frigate.comms.inter_process import InterProcessRequestor
from frigate.const import MODEL_CACHE_DIR
@ -25,6 +24,9 @@ class AudioTranscriptionModelRunner:
if model_size == "large":
# use the Whisper download function instead of our own
# Import dynamically to avoid crashes on systems without AVX support
from faster_whisper.utils import download_model
logger.debug("Downloading Whisper audio transcription model")
download_model(
size_or_id="small" if device == "cuda" else "tiny",


@ -6,10 +6,8 @@ import threading
import time
from typing import Optional
from peewee import DoesNotExist
from frigate.comms.inter_process import InterProcessRequestor
from frigate.config import FrigateConfig
from frigate.const import (
@ -32,11 +30,13 @@ class AudioTranscriptionPostProcessor(PostProcessorApi):
self,
config: FrigateConfig,
requestor: InterProcessRequestor,
embeddings,
metrics: DataProcessorMetrics,
):
super().__init__(config, metrics, None)
self.config = config
self.requestor = requestor
self.embeddings = embeddings
self.recognizer = None
self.transcription_lock = threading.Lock()
self.transcription_thread = None
@ -50,6 +50,9 @@ class AudioTranscriptionPostProcessor(PostProcessorApi):
def __build_recognizer(self) -> None:
try:
# Import dynamically to avoid crashes on systems without AVX support
from faster_whisper import WhisperModel
self.recognizer = WhisperModel(
model_size_or_path="small",
device="cuda"
@ -128,10 +131,7 @@ class AudioTranscriptionPostProcessor(PostProcessorApi):
)
# Embed the description
self.embeddings.embed_description(event_id, transcription)
except DoesNotExist:
logger.debug("No recording found for audio transcription post-processing")


@ -16,6 +16,7 @@ from peewee import DoesNotExist
from frigate.comms.embeddings_updater import EmbeddingsRequestEnum
from frigate.comms.inter_process import InterProcessRequestor
from frigate.config import FrigateConfig
from frigate.config.camera import CameraConfig
from frigate.config.camera.review import GenAIReviewConfig, ImageSourceEnum
from frigate.const import CACHE_DIR, CLIPS_DIR, UPDATE_REVIEW_DESCRIPTION
from frigate.data_processing.types import PostProcessDataEnum
@ -30,6 +31,7 @@ from ..types import DataProcessorMetrics
logger = logging.getLogger(__name__)
RECORDING_BUFFER_EXTENSION_PERCENT = 0.10
MIN_RECORDING_DURATION = 10
class ReviewDescriptionProcessor(PostProcessorApi):
@ -90,7 +92,8 @@ class ReviewDescriptionProcessor(PostProcessorApi):
pixels_per_image = width * height
tokens_per_image = pixels_per_image / 1250
prompt_tokens = 3500
response_tokens = 300
available_tokens = context_size - prompt_tokens - response_tokens
max_frames = int(available_tokens / tokens_per_image)
return min(max(max_frames, 3), 20)
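The frame-budget change above reserves explicit prompt and response tokens instead of a flat 2% margin. Under the same assumptions (roughly 1 vision token per 1250 pixels, 3500 prompt tokens, 300 response tokens), the budget works out as:

```python
def max_genai_frames(width: int, height: int, context_size: int) -> int:
    prompt_tokens = 3500
    response_tokens = 300
    # rough cost of one frame in vision tokens
    tokens_per_image = (width * height) / 1250
    available_tokens = context_size - prompt_tokens - response_tokens
    # clamp to a sane range: at least 3 frames, at most 20
    return min(max(int(available_tokens / tokens_per_image), 3), 20)
```

For example, a 640x360 preview stream in an 8192-token context leaves 4392 tokens, enough for about 23 frames, which the clamp reduces to 20.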
@ -129,7 +132,15 @@ class ReviewDescriptionProcessor(PostProcessorApi):
if image_source == ImageSourceEnum.recordings:
duration = final_data["end_time"] - final_data["start_time"]
buffer_extension = min(5, duration * RECORDING_BUFFER_EXTENSION_PERCENT)
# Ensure minimum total duration for short review items
# This provides better context for brief events
total_duration = duration + (2 * buffer_extension)
if total_duration < MIN_RECORDING_DURATION:
# Expand buffer to reach minimum duration, still respecting max of 5s per side
additional_buffer_per_side = (MIN_RECORDING_DURATION - duration) / 2
buffer_extension = min(5, additional_buffer_per_side)
thumbs = self.get_recording_frames(
camera,
@ -181,7 +192,7 @@ class ReviewDescriptionProcessor(PostProcessorApi):
self.requestor,
self.genai_client,
self.review_desc_speed,
camera_config,
final_data,
thumbs,
camera_config.review.genai,
@ -410,7 +421,7 @@ def run_analysis(
requestor: InterProcessRequestor,
genai_client: GenAIClient,
review_inference_speed: InferenceSpeed,
camera_config: CameraConfig,
final_data: dict[str, str],
thumbs: list[bytes],
genai_config: GenAIReviewConfig,
@ -418,10 +429,19 @@ def run_analysis(
attribute_labels: list[str],
) -> None:
start = datetime.datetime.now().timestamp()
# Format zone names using zone config friendly names if available
formatted_zones = []
for zone_name in final_data["data"]["zones"]:
if zone_name in camera_config.zones:
formatted_zones.append(
camera_config.zones[zone_name].get_formatted_name(zone_name)
)
analytics_data = {
"id": final_data["id"],
"camera": camera_config.get_formatted_name(),
"zones": formatted_zones,
"start": datetime.datetime.fromtimestamp(final_data["start_time"]).strftime(
"%A, %I:%M %p"
),
@ -458,7 +478,6 @@ def run_analysis(
genai_config.preferred_language,
genai_config.debug_save_thumbnails,
genai_config.activity_context_prompt,
)
review_inference_speed.update(datetime.datetime.now().timestamp() - start)
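The recording-buffer change earlier in this file caps the padding at 5 s per side and tops short events up toward a 10 s minimum. The resulting clip bounds, as a standalone sketch of that logic:

```python
MIN_RECORDING_DURATION = 10
RECORDING_BUFFER_EXTENSION_PERCENT = 0.10

def clip_bounds(start_time: float, end_time: float) -> tuple[float, float]:
    duration = end_time - start_time
    # pad each side by 10% of the event duration, capped at 5s
    buffer = min(5, duration * RECORDING_BUFFER_EXTENSION_PERCENT)
    # short events get topped up toward the 10s minimum, same 5s cap
    if duration + 2 * buffer < MIN_RECORDING_DURATION:
        buffer = min(5, (MIN_RECORDING_DURATION - duration) / 2)
    return start_time - buffer, end_time + buffer
```

A 2 s event is padded out to a full 10 s clip, while a 100 s event only gains the capped 5 s per side.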


@ -227,6 +227,9 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
self.tensor_output_details[0]["index"]
)[0]
probs = res / res.sum(axis=0)
logger.debug(
f"{self.model_config.name} Ran state classification with probabilities: {probs}"
)
best_id = np.argmax(probs)
score = round(probs[best_id], 2)
self.__update_metrics(datetime.datetime.now().timestamp() - now)
@ -418,8 +421,8 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
obj_data["box"][2],
obj_data["box"][3],
max(
obj_data["box"][2] - obj_data["box"][0],
obj_data["box"][3] - obj_data["box"][1],
),
1.0,
)
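The box fix above corrects transposed indices: with boxes stored as (x1, y1, x2, y2), the longest side is the larger of x2 - x1 and y2 - y1, whereas the old code mixed coordinates across axes. For reference:

```python
def longest_side(box: tuple[int, int, int, int]) -> int:
    # box is (x1, y1, x2, y2)
    x1, y1, x2, y2 = box
    # width vs height; the buggy version compared y1 - x1 and y2 - x2
    return max(x2 - x1, y2 - y1)
```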
@ -455,6 +458,9 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
self.tensor_output_details[0]["index"]
)[0]
probs = res / res.sum(axis=0)
logger.debug(
f"{self.model_config.name} Ran object classification with probabilities: {probs}"
)
best_id = np.argmax(probs)
score = round(probs[best_id], 2)
self.__update_metrics(datetime.datetime.now().timestamp() - now)
@ -546,5 +552,8 @@ def write_classification_attempt(
)
# delete the oldest attempt image if the maximum is reached
try:
if len(files) > max_files:
os.unlink(os.path.join(folder, files[-1]))
except FileNotFoundError:
pass


@ -423,7 +423,10 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
res = self.recognizer.classify(img)
if not res:
return {
"message": "Model is still training, please try again in a few moments.",
"success": False,
}
sub_label, score = res
@ -442,6 +445,13 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
)
shutil.move(current_file, new_file)
return {
"message": f"Successfully reprocessed face. Result: {sub_label} (score: {score:.2f})",
"success": True,
"face_name": sub_label,
"score": score,
}
def expire_object(self, object_id: str, camera: str):
if object_id in self.person_face_history:
self.person_face_history.pop(object_id)


@ -3,6 +3,7 @@
import logging
import os
import platform
import threading
from abc import ABC, abstractmethod
from typing import Any
@ -161,12 +162,12 @@ class CudaGraphRunner(BaseModelRunner):
"""
@staticmethod
def is_model_supported(model_type: str) -> bool:
# Import here to avoid circular imports
from frigate.detectors.detector_config import ModelTypeEnum
from frigate.embeddings.types import EnrichmentModelTypeEnum
return model_type not in [
ModelTypeEnum.yolonas.value,
EnrichmentModelTypeEnum.paddleocr.value,
EnrichmentModelTypeEnum.jina_v1.value,
@ -239,9 +240,31 @@ class OpenVINOModelRunner(BaseModelRunner):
EnrichmentModelTypeEnum.jina_v2.value,
]
@staticmethod
def is_model_npu_supported(model_type: str) -> bool:
# Import here to avoid circular imports
from frigate.embeddings.types import EnrichmentModelTypeEnum
return model_type not in [
EnrichmentModelTypeEnum.paddleocr.value,
EnrichmentModelTypeEnum.jina_v1.value,
EnrichmentModelTypeEnum.jina_v2.value,
EnrichmentModelTypeEnum.arcface.value,
]
def __init__(self, model_path: str, device: str, model_type: str, **kwargs):
self.model_path = model_path
self.device = device
self.model_type = model_type
if device == "NPU" and not OpenVINOModelRunner.is_model_npu_supported(
model_type
):
logger.warning(
f"OpenVINO model {model_type} is not supported on NPU, using GPU instead"
)
device = "GPU"
self.complex_model = OpenVINOModelRunner.is_complex_model(model_type)
if not os.path.isfile(model_path):
@ -269,6 +292,10 @@ class OpenVINOModelRunner(BaseModelRunner):
self.infer_request = self.compiled_model.create_infer_request()
self.input_tensor: ov.Tensor | None = None
# Thread lock to prevent concurrent inference (needed for JinaV2 which shares
# one runner between text and vision embeddings called from different threads)
self._inference_lock = threading.Lock()
if not self.complex_model:
try:
input_shape = self.compiled_model.inputs[0].get_shape()
@ -312,67 +339,81 @@ class OpenVINOModelRunner(BaseModelRunner):
Returns:
List of output tensors
"""
# Handle single input case for backward compatibility
if (
len(inputs) == 1
and len(self.compiled_model.inputs) == 1
and self.input_tensor is not None
):
# Single input case - use the pre-allocated tensor for efficiency
input_data = list(inputs.values())[0]
np.copyto(self.input_tensor.data, input_data)
self.infer_request.infer(self.input_tensor)
else:
if self.complex_model:
# Lock prevents concurrent access to infer_request
# Needed for JinaV2: genai thread (text) + embeddings thread (vision)
with self._inference_lock:
from frigate.embeddings.types import EnrichmentModelTypeEnum
if self.model_type in [EnrichmentModelTypeEnum.arcface.value]:
# For face recognition models, create a fresh infer_request
# for each inference to avoid state pollution that causes incorrect results.
self.infer_request = self.compiled_model.create_infer_request()
# Handle single input case for backward compatibility
if (
len(inputs) == 1
and len(self.compiled_model.inputs) == 1
and self.input_tensor is not None
):
# Single input case - use the pre-allocated tensor for efficiency
input_data = list(inputs.values())[0]
np.copyto(self.input_tensor.data, input_data)
self.infer_request.infer(self.input_tensor)
else:
if self.complex_model:
try:
# This ensures the model starts with a clean state for each sequence
# Important for RNN models like PaddleOCR recognition
self.infer_request.reset_state()
except Exception:
# this will raise an exception for models with AUTO set as the device
pass
# Multiple inputs case - set each input by name
for input_name, input_data in inputs.items():
# Find the input by name and its index
input_port = None
input_index = None
for idx, port in enumerate(self.compiled_model.inputs):
if port.get_any_name() == input_name:
input_port = port
input_index = idx
break
if input_port is None:
raise ValueError(f"Input '{input_name}' not found in model")
# Create tensor with the correct element type
input_element_type = input_port.get_element_type()
# Ensure input data matches the expected dtype to prevent type mismatches
# that can occur with models like Jina-CLIP v2 running on OpenVINO
expected_dtype = input_element_type.to_dtype()
if input_data.dtype != expected_dtype:
logger.debug(
f"Converting input '{input_name}' from {input_data.dtype} to {expected_dtype}"
)
input_data = input_data.astype(expected_dtype)
input_tensor = ov.Tensor(input_element_type, input_data.shape)
np.copyto(input_tensor.data, input_data)
# Set the input tensor for the specific port index
self.infer_request.set_input_tensor(input_index, input_tensor)
# Run inference
try:
# This ensures the model starts with a clean state for each sequence
# Important for RNN models like PaddleOCR recognition
self.infer_request.reset_state()
except Exception:
# this will raise an exception for models with AUTO set as the device
pass
self.infer_request.infer()
except Exception as e:
logger.error(f"Error during OpenVINO inference: {e}")
return []
# Multiple inputs case - set each input by name
for input_name, input_data in inputs.items():
# Find the input by name and its index
input_port = None
input_index = None
for idx, port in enumerate(self.compiled_model.inputs):
if port.get_any_name() == input_name:
input_port = port
input_index = idx
break
# Get all output tensors
outputs = []
for i in range(len(self.compiled_model.outputs)):
outputs.append(self.infer_request.get_output_tensor(i).data)
if input_port is None:
raise ValueError(f"Input '{input_name}' not found in model")
# Create tensor with the correct element type
input_element_type = input_port.get_element_type()
# Ensure input data matches the expected dtype to prevent type mismatches
# that can occur with models like Jina-CLIP v2 running on OpenVINO
expected_dtype = input_element_type.to_dtype()
if input_data.dtype != expected_dtype:
logger.debug(
f"Converting input '{input_name}' from {input_data.dtype} to {expected_dtype}"
)
input_data = input_data.astype(expected_dtype)
input_tensor = ov.Tensor(input_element_type, input_data.shape)
np.copyto(input_tensor.data, input_data)
# Set the input tensor for the specific port index
self.infer_request.set_input_tensor(input_index, input_tensor)
# Run inference
self.infer_request.infer()
# Get all output tensors
outputs = []
for i in range(len(self.compiled_model.outputs)):
outputs.append(self.infer_request.get_output_tensor(i).data)
return outputs
return outputs
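The lock-plus-fresh-request pattern above can be sketched outside of OpenVINO. This is a minimal illustration only (`SharedRunner`, `STATEFUL_TYPES`, and the placeholder request object are hypothetical, not Frigate's classes): one runner shared by several threads serializes access with a lock, and stateful model types get a fresh request per call so no state leaks between inferences.

```python
import threading

class SharedRunner:
    """Sketch of one runner shared across threads (e.g. text + vision
    embeddings): a lock serializes access to the single infer request,
    and stateful model types get a fresh request per inference."""

    STATEFUL_TYPES = {"arcface"}  # assumption: types needing a clean request

    def __init__(self, model_type: str):
        self.model_type = model_type
        self._lock = threading.Lock()
        self._request = object()  # stands in for an OpenVINO InferRequest

    def run(self, x: int) -> int:
        with self._lock:
            if self.model_type in self.STATEFUL_TYPES:
                # recreate the request so no state pollutes the next result
                self._request = object()
            return x * 2  # placeholder for real inference

runner = SharedRunner("arcface")
results = []

def worker(v: int) -> None:
    results.append(runner.run(v))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # [0, 2, 4, 6]
```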
class RKNNModelRunner(BaseModelRunner):
@ -500,7 +541,7 @@ def get_optimized_runner(
return OpenVINOModelRunner(model_path, device, model_type, **kwargs)
if (
not CudaGraphRunner.is_complex_model(model_type)
CudaGraphRunner.is_model_supported(model_type)
and providers[0] == "CUDAExecutionProvider"
):
options[0] = {

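The NPU fallback added above boils down to a device-resolution check before compilation. A hedged sketch, with an assumed set of unsupported type names standing in for `is_model_npu_supported()`:

```python
import logging

logger = logging.getLogger(__name__)

# Assumed names for the enrichment model types that cannot run on the NPU;
# illustrative only, mirroring the spirit of is_model_npu_supported().
NPU_UNSUPPORTED = {"paddleocr", "jina-v1", "jina-v2", "arcface"}

def resolve_device(model_type: str, requested: str) -> str:
    """Return the device to compile for, falling back from NPU to GPU
    when the model type is known not to work on the NPU."""
    if requested == "NPU" and model_type in NPU_UNSUPPORTED:
        logger.warning("%s is not supported on NPU, using GPU instead", model_type)
        return "GPU"
    return requested

print(resolve_device("arcface", "NPU"))  # GPU
print(resolve_device("yolonas", "NPU"))  # NPU
```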
View File

@ -472,7 +472,7 @@ class Embeddings:
)
thumbnail_missing = True
except DoesNotExist:
logger.warning(
logger.debug(
f"Event ID {trigger.data} for trigger {trigger_name} does not exist."
)
continue

View File

@ -226,7 +226,9 @@ class EmbeddingMaintainer(threading.Thread):
for c in self.config.cameras.values()
):
self.post_processors.append(
AudioTranscriptionPostProcessor(self.config, self.requestor, metrics)
AudioTranscriptionPostProcessor(
self.config, self.requestor, self.embeddings, metrics
)
)
semantic_trigger_processor: SemanticTriggerProcessor | None = None

View File

@ -45,15 +45,13 @@ class GenAIClient:
preferred_language: str | None,
debug_save: bool,
activity_context_prompt: str,
camera_context: str = "",
) -> ReviewMetadata | None:
"""Generate a description for the review item activity."""
def get_concern_prompt() -> str:
if concerns:
concern_list = "\n - ".join(concerns)
return f"""
- `other_concerns` (list of strings): Include a list of any of the following concerns that are occurring:
return f"""- `other_concerns` (list of strings): Include a list of any of the following concerns that are occurring:
- {concern_list}"""
else:
return ""
@ -70,25 +68,13 @@ class GenAIClient:
else:
return "\n- (No objects detected)"
def get_camera_context_section() -> str:
if camera_context:
return f"""## Camera Spatial Context
Use this spatial information when writing the title and scene description to provide more accurate context about where activity is occurring or where people/objects are moving to/from.
{camera_context}"""
return ""
camera_context_section = get_camera_context_section()
context_prompt = f"""
Your task is to analyze the sequence of images ({len(thumbnails)} total) taken in chronological order from the perspective of the {review_data["camera"].replace("_", " ")} security camera.
Your task is to analyze the sequence of images ({len(thumbnails)} total) taken in chronological order from the perspective of the {review_data["camera"]} security camera.
## Normal Activity Patterns for This Property
{activity_context_prompt}
{camera_context_section}
## Task Instructions
Your task is to provide a clear, accurate description of the scene that:
@ -113,8 +99,8 @@ When forming your description:
## Response Format
Your response MUST be a flat JSON object with:
- `title` (string): A concise, direct title that describes the purpose or overall action, not just what you literally see. {"Use spatial context when available to make titles more meaningful." if camera_context_section else ""} Use names from "Objects in Scene" based on what you visually observe. If you see both a name and an unidentified object of the same type but visually observe only one person/object, use ONLY the name. Examples: "Joe walking dog", "Person taking out trash", "Joe accessing vehicle", "Person leaving porch for driveway", "Joe and person on front porch".
- `scene` (string): A narrative description of what happens across the sequence from start to finish. **Only describe actions you can actually observe happening in the frames provided.** Do not infer or assume actions that aren't visible (e.g., if you see someone walking but never see them sit, don't say they sat down). Include setting, detected objects, and their observable actions. Avoid speculation or filling in assumed behaviors. Your description should align with and support the threat level you assign.
- `title` (string): A concise, direct title that describes the primary action or event in the sequence, not just what you literally see. Use spatial context when available to make titles more meaningful. When multiple objects/actions are present, prioritize whichever is most prominent or occurs first. Use names from "Objects in Scene" based on what you visually observe. If you see both a name and an unidentified object of the same type but visually observe only one person/object, use ONLY the name. Examples: "Joe walking dog", "Person taking out trash", "Vehicle arriving in driveway", "Joe accessing vehicle", "Person leaving porch for driveway".
- `scene` (string): A narrative description of what happens across the sequence from start to finish, in chronological order. Start by describing how the sequence begins, then describe the progression of events. **Describe all significant movements and actions in the order they occur.** For example, if a vehicle arrives and then a person exits, describe both actions sequentially. **Only describe actions you can actually observe happening in the frames provided.** Do not infer or assume actions that aren't visible (e.g., if you see someone walking but never see them sit, don't say they sat down). Include setting, detected objects, and their observable actions. Avoid speculation or filling in assumed behaviors. Your description should align with and support the threat level you assign.
- `confidence` (float): 0-1 confidence in your analysis. Higher confidence when objects/actions are clearly visible and context is unambiguous. Lower confidence when the sequence is unclear, objects are partially obscured, or context is ambiguous.
- `potential_threat_level` (integer): 0, 1, or 2 as defined in "Normal Activity Patterns for This Property" above. Your threat level must be consistent with your scene description and the guidance above.
{get_concern_prompt()}
@ -123,7 +109,7 @@ Your response MUST be a flat JSON object with:
- Frame 1 = earliest, Frame {len(thumbnails)} = latest
- Activity started at {review_data["start"]} and lasted {review_data["duration"]} seconds
- Zones involved: {", ".join(z.replace("_", " ").title() for z in review_data["zones"]) or "None"}
- Zones involved: {", ".join(review_data["zones"]) if review_data["zones"] else "None"}
## Objects in Scene

View File

@ -407,6 +407,19 @@ class ReviewSegmentMaintainer(threading.Thread):
segment.last_detection_time = frame_time
for object in activity.get_all_objects():
# Alert-level objects should always be added (they extend/upgrade the segment)
# Detection-level objects should only be added if:
# - The segment is a detection segment (matching severity), OR
# - The segment is an alert AND the object started before the alert ended
# (objects starting after will be in the new detection segment)
is_alert_object = object in activity.categorized_objects["alerts"]
if not is_alert_object and segment.severity == SeverityEnum.alert:
# This is a detection-level object
# Only add if it started during the alert's active period
if object["start_time"] > segment.last_alert_time:
continue
if not object["sub_label"]:
segment.detections[object["id"]] = object["label"]
elif object["sub_label"][0] in self.config.model.all_attributes:

View File

@ -113,6 +113,7 @@ class StorageMaintainer(threading.Thread):
recordings: Recordings = (
Recordings.select(
Recordings.id,
Recordings.camera,
Recordings.start_time,
Recordings.end_time,
Recordings.segment_size,
@ -137,7 +138,7 @@ class StorageMaintainer(threading.Thread):
)
event_start = 0
deleted_recordings = set()
deleted_recordings = []
for recording in recordings:
# check if 1 hour of storage has been reclaimed
if deleted_segments_size > hourly_bandwidth:
@ -172,7 +173,7 @@ class StorageMaintainer(threading.Thread):
if not keep:
try:
clear_and_unlink(Path(recording.path), missing_ok=False)
deleted_recordings.add(recording.id)
deleted_recordings.append(recording)
deleted_segments_size += recording.segment_size
except FileNotFoundError:
# this file was not found so we must assume no space was cleaned up
@ -186,6 +187,9 @@ class StorageMaintainer(threading.Thread):
recordings = (
Recordings.select(
Recordings.id,
Recordings.camera,
Recordings.start_time,
Recordings.end_time,
Recordings.path,
Recordings.segment_size,
)
@ -201,7 +205,7 @@ class StorageMaintainer(threading.Thread):
try:
clear_and_unlink(Path(recording.path), missing_ok=False)
deleted_segments_size += recording.segment_size
deleted_recordings.add(recording.id)
deleted_recordings.append(recording)
except FileNotFoundError:
# this file was not found so we must assume no space was cleaned up
pass
@ -211,7 +215,50 @@ class StorageMaintainer(threading.Thread):
logger.debug(f"Expiring {len(deleted_recordings)} recordings")
# delete up to 100,000 at a time
max_deletes = 100000
deleted_recordings_list = list(deleted_recordings)
# Update has_clip for events that overlap with deleted recordings
if deleted_recordings:
# Group deleted recordings by camera
camera_recordings = {}
for recording in deleted_recordings:
if recording.camera not in camera_recordings:
camera_recordings[recording.camera] = {
"min_start": recording.start_time,
"max_end": recording.end_time,
}
else:
camera_recordings[recording.camera]["min_start"] = min(
camera_recordings[recording.camera]["min_start"],
recording.start_time,
)
camera_recordings[recording.camera]["max_end"] = max(
camera_recordings[recording.camera]["max_end"],
recording.end_time,
)
# Find all events that overlap with deleted recordings time range per camera
events_to_update = []
for camera, time_range in camera_recordings.items():
overlapping_events = Event.select(Event.id).where(
Event.camera == camera,
Event.has_clip == True,
Event.start_time < time_range["max_end"],
Event.end_time > time_range["min_start"],
)
for event in overlapping_events:
events_to_update.append(event.id)
# Update has_clip to False for overlapping events
if events_to_update:
for i in range(0, len(events_to_update), max_deletes):
batch = events_to_update[i : i + max_deletes]
Event.update(has_clip=False).where(Event.id << batch).execute()
logger.debug(
f"Updated has_clip to False for {len(events_to_update)} events"
)
deleted_recordings_list = [r.id for r in deleted_recordings]
for i in range(0, len(deleted_recordings_list), max_deletes):
Recordings.delete().where(
Recordings.id << deleted_recordings_list[i : i + max_deletes]
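The `has_clip` cleanup above can be sketched with plain dicts: collapse the deleted recordings into one `[min_start, max_end]` window per camera, then flag events whose interval overlaps that window using the standard overlap test (`start < max_end and end > min_start`). Data shapes are illustrative, not the ORM models.

```python
# Group deleted recordings into one time window per camera.
deleted = [
    {"camera": "front", "start": 10.0, "end": 20.0},
    {"camera": "front", "start": 30.0, "end": 40.0},
    {"camera": "back", "start": 5.0, "end": 8.0},
]
windows: dict[str, dict[str, float]] = {}
for rec in deleted:
    w = windows.setdefault(
        rec["camera"], {"min_start": rec["start"], "max_end": rec["end"]}
    )
    w["min_start"] = min(w["min_start"], rec["start"])
    w["max_end"] = max(w["max_end"], rec["end"])

events = [
    {"id": "a", "camera": "front", "start": 15.0, "end": 25.0},  # overlaps
    {"id": "b", "camera": "front", "start": 50.0, "end": 60.0},  # after window
    {"id": "c", "camera": "back", "start": 1.0, "end": 6.0},     # overlaps
]
# Interval overlap: the event starts before the window ends and ends after
# the window starts.
to_update = [
    e["id"]
    for e in events
    if e["camera"] in windows
    and e["start"] < windows[e["camera"]]["max_end"]
    and e["end"] > windows[e["camera"]]["min_start"]
]
print(to_update)  # ['a', 'c']
```

Note the window is a bounding interval per camera, so an event falling in a gap between two deleted recordings can still be flagged; that is a deliberately conservative trade-off for an emergency cleanup path.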

View File

@ -23,6 +23,7 @@ class ModelStatusTypesEnum(str, Enum):
error = "error"
training = "training"
complete = "complete"
failed = "failed"
class TrackedObjectUpdateTypesEnum(str, Enum):

View File

@ -1,5 +1,7 @@
"""Util for classification models."""
import datetime
import json
import logging
import os
import random
@ -27,10 +29,96 @@ from frigate.util.process import FrigateProcess
BATCH_SIZE = 16
EPOCHS = 50
LEARNING_RATE = 0.001
TRAINING_METADATA_FILE = ".training_metadata.json"
logger = logging.getLogger(__name__)
def write_training_metadata(model_name: str, image_count: int) -> None:
"""
Write training metadata to a hidden file in the model's clips directory.
Args:
model_name: Name of the classification model
image_count: Number of images used in training
"""
clips_model_dir = os.path.join(CLIPS_DIR, model_name)
os.makedirs(clips_model_dir, exist_ok=True)
metadata_path = os.path.join(clips_model_dir, TRAINING_METADATA_FILE)
metadata = {
"last_training_date": datetime.datetime.now().isoformat(),
"last_training_image_count": image_count,
}
try:
with open(metadata_path, "w") as f:
json.dump(metadata, f, indent=2)
logger.info(f"Wrote training metadata for {model_name}: {image_count} images")
except Exception as e:
logger.error(f"Failed to write training metadata for {model_name}: {e}")
def read_training_metadata(model_name: str) -> dict[str, any] | None:
"""
Read training metadata from the hidden file in the model's clips directory.
Args:
model_name: Name of the classification model
Returns:
Dictionary with last_training_date and last_training_image_count, or None if not found
"""
clips_model_dir = os.path.join(CLIPS_DIR, model_name)
metadata_path = os.path.join(clips_model_dir, TRAINING_METADATA_FILE)
if not os.path.exists(metadata_path):
return None
try:
with open(metadata_path, "r") as f:
metadata = json.load(f)
return metadata
except Exception as e:
logger.error(f"Failed to read training metadata for {model_name}: {e}")
return None
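A round-trip of the metadata helpers above, using a temporary directory in place of `CLIPS_DIR` (the model name here is made up); the file name matches `TRAINING_METADATA_FILE`:

```python
import datetime
import json
import os
import tempfile

METADATA_FILE = ".training_metadata.json"

with tempfile.TemporaryDirectory() as clips_dir:
    # hypothetical model name for illustration
    model_dir = os.path.join(clips_dir, "bird_classifier")
    os.makedirs(model_dir, exist_ok=True)
    path = os.path.join(model_dir, METADATA_FILE)
    # write the same two keys the helper records
    with open(path, "w") as f:
        json.dump(
            {
                "last_training_date": datetime.datetime.now().isoformat(),
                "last_training_image_count": 128,
            },
            f,
            indent=2,
        )
    # read it back the way read_training_metadata() does
    with open(path) as f:
        meta = json.load(f)

print(meta["last_training_image_count"])  # 128
```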
def get_dataset_image_count(model_name: str) -> int:
"""
Count the total number of images in the model's dataset directory.
Args:
model_name: Name of the classification model
Returns:
Total count of images across all categories
"""
dataset_dir = os.path.join(CLIPS_DIR, model_name, "dataset")
if not os.path.exists(dataset_dir):
return 0
total_count = 0
try:
for category in os.listdir(dataset_dir):
category_dir = os.path.join(dataset_dir, category)
if not os.path.isdir(category_dir):
continue
image_files = [
f
for f in os.listdir(category_dir)
if f.lower().endswith((".webp", ".png", ".jpg", ".jpeg"))
]
total_count += len(image_files)
except Exception as e:
logger.error(f"Failed to count dataset images for {model_name}: {e}")
return 0
return total_count
class ClassificationTrainingProcess(FrigateProcess):
def __init__(self, model_name: str) -> None:
super().__init__(
@ -42,7 +130,8 @@ class ClassificationTrainingProcess(FrigateProcess):
def run(self) -> None:
self.pre_run_setup()
self.__train_classification_model()
success = self.__train_classification_model()
exit(0 if success else 1)
def __generate_representative_dataset_factory(self, dataset_dir: str):
def generate_representative_dataset():
@ -65,85 +154,117 @@ class ClassificationTrainingProcess(FrigateProcess):
@redirect_output_to_logger(logger, logging.DEBUG)
def __train_classification_model(self) -> bool:
"""Train a classification model."""
try:
# import in the function so that tensorflow is not initialized multiple times
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# import in the function so that tensorflow is not initialized multiple times
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.preprocessing.image import ImageDataGenerator
dataset_dir = os.path.join(CLIPS_DIR, self.model_name, "dataset")
model_dir = os.path.join(MODEL_CACHE_DIR, self.model_name)
os.makedirs(model_dir, exist_ok=True)
logger.info(f"Kicking off classification training for {self.model_name}.")
dataset_dir = os.path.join(CLIPS_DIR, self.model_name, "dataset")
model_dir = os.path.join(MODEL_CACHE_DIR, self.model_name)
os.makedirs(model_dir, exist_ok=True)
num_classes = len(
[
d
for d in os.listdir(dataset_dir)
if os.path.isdir(os.path.join(dataset_dir, d))
]
)
num_classes = len(
[
d
for d in os.listdir(dataset_dir)
if os.path.isdir(os.path.join(dataset_dir, d))
]
)
# Start with imagenet base model with 35% of channels in each layer
base_model = MobileNetV2(
input_shape=(224, 224, 3),
include_top=False,
weights="imagenet",
alpha=0.35,
)
base_model.trainable = False # Freeze pre-trained layers
if num_classes < 2:
logger.error(
f"Training failed for {self.model_name}: Need at least 2 classes, found {num_classes}"
)
return False
model = models.Sequential(
[
base_model,
layers.GlobalAveragePooling2D(),
layers.Dense(128, activation="relu"),
layers.Dropout(0.3),
layers.Dense(num_classes, activation="softmax"),
]
)
# Start with imagenet base model with 35% of channels in each layer
base_model = MobileNetV2(
input_shape=(224, 224, 3),
include_top=False,
weights="imagenet",
alpha=0.35,
)
base_model.trainable = False # Freeze pre-trained layers
model.compile(
optimizer=optimizers.Adam(learning_rate=LEARNING_RATE),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
model = models.Sequential(
[
base_model,
layers.GlobalAveragePooling2D(),
layers.Dense(128, activation="relu"),
layers.Dropout(0.3),
layers.Dense(num_classes, activation="softmax"),
]
)
# create training set
datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)
train_gen = datagen.flow_from_directory(
dataset_dir,
target_size=(224, 224),
batch_size=BATCH_SIZE,
class_mode="categorical",
subset="training",
)
model.compile(
optimizer=optimizers.Adam(learning_rate=LEARNING_RATE),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
# write labelmap
class_indices = train_gen.class_indices
index_to_class = {v: k for k, v in class_indices.items()}
sorted_classes = [index_to_class[i] for i in range(len(index_to_class))]
with open(os.path.join(model_dir, "labelmap.txt"), "w") as f:
for class_name in sorted_classes:
f.write(f"{class_name}\n")
# create training set
datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)
train_gen = datagen.flow_from_directory(
dataset_dir,
target_size=(224, 224),
batch_size=BATCH_SIZE,
class_mode="categorical",
subset="training",
)
# train the model
model.fit(train_gen, epochs=EPOCHS, verbose=0)
total_images = train_gen.samples
logger.debug(
f"Training {self.model_name}: {total_images} images across {num_classes} classes"
)
# convert model to tflite
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = (
self.__generate_representative_dataset_factory(dataset_dir)
)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
# write labelmap
class_indices = train_gen.class_indices
index_to_class = {v: k for k, v in class_indices.items()}
sorted_classes = [index_to_class[i] for i in range(len(index_to_class))]
with open(os.path.join(model_dir, "labelmap.txt"), "w") as f:
for class_name in sorted_classes:
f.write(f"{class_name}\n")
# write model
with open(os.path.join(model_dir, "model.tflite"), "wb") as f:
f.write(tflite_model)
# train the model
logger.debug(f"Training {self.model_name} for {EPOCHS} epochs...")
model.fit(train_gen, epochs=EPOCHS, verbose=0)
logger.debug(f"Converting {self.model_name} to TFLite...")
# convert model to tflite
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = (
self.__generate_representative_dataset_factory(dataset_dir)
)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
# write model
model_path = os.path.join(model_dir, "model.tflite")
with open(model_path, "wb") as f:
f.write(tflite_model)
# verify model file was written successfully
if not os.path.exists(model_path) or os.path.getsize(model_path) == 0:
logger.error(
f"Training failed for {self.model_name}: Model file was not created or is empty"
)
return False
# write training metadata with image count
dataset_image_count = get_dataset_image_count(self.model_name)
write_training_metadata(self.model_name, dataset_image_count)
logger.info(f"Finished training {self.model_name}")
return True
except Exception as e:
logger.error(f"Training failed for {self.model_name}: {e}", exc_info=True)
return False
def kickoff_model_training(
@ -165,18 +286,36 @@ def kickoff_model_training(
training_process.start()
training_process.join()
# reload model and mark training as complete
embeddingRequestor.send_data(
EmbeddingsRequestEnum.reload_classification_model.value,
{"model_name": model_name},
)
requestor.send_data(
UPDATE_MODEL_STATE,
{
"model": model_name,
"state": ModelStatusTypesEnum.complete,
},
)
# check if training succeeded by examining the exit code
training_success = training_process.exitcode == 0
if training_success:
# reload model and mark training as complete
embeddingRequestor.send_data(
EmbeddingsRequestEnum.reload_classification_model.value,
{"model_name": model_name},
)
requestor.send_data(
UPDATE_MODEL_STATE,
{
"model": model_name,
"state": ModelStatusTypesEnum.complete,
},
)
else:
logger.error(
f"Training subprocess failed for {model_name} (exit code: {training_process.exitcode})"
)
# mark training as failed so UI shows error state
# don't reload the model since it failed
requestor.send_data(
UPDATE_MODEL_STATE,
{
"model": model_name,
"state": ModelStatusTypesEnum.failed,
},
)
requestor.stop()
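The success check above hinges on the training subprocess's exit code: the training function now returns a boolean, `run()` exits with 0 or 1, and the parent maps that to the `complete` or `failed` model state. A stand-in using `subprocess` (the real code uses a `FrigateProcess`; this only demonstrates the exit-code contract):

```python
import subprocess
import sys

def run_training(ok: bool) -> str:
    """Stand-in for the training subprocess: exit code 0 means the model
    trained and was written; any other code marks the model as failed."""
    proc = subprocess.run(
        [sys.executable, "-c", f"raise SystemExit({0 if ok else 1})"]
    )
    return "complete" if proc.returncode == 0 else "failed"

print(run_training(True))   # complete
print(run_training(False))  # failed
```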

View File

@ -369,6 +369,10 @@ def get_ort_providers(
"enable_cpu_mem_arena": False,
}
)
elif provider == "AzureExecutionProvider":
# Skip Azure provider - it is not typically available on local hardware,
# and leaving it first in the list prevents fallback to OpenVINO
continue
else:
providers.append(provider)
options.append({})
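The provider filtering above reduces to dropping `AzureExecutionProvider` from the requested list so a later provider (e.g. OpenVINO or CPU) can take its place. A minimal sketch of that rule, detached from the rest of `get_ort_providers`:

```python
def filter_providers(requested: list[str]) -> list[str]:
    """Drop providers that are not usable on local hardware so the next
    requested provider can be selected instead."""
    kept = []
    for provider in requested:
        if provider == "AzureExecutionProvider":
            continue  # would otherwise block fallback to the next provider
        kept.append(provider)
    return kept

print(filter_providers(
    ["AzureExecutionProvider", "OpenVINOExecutionProvider", "CPUExecutionProvider"]
))  # ['OpenVINOExecutionProvider', 'CPUExecutionProvider']
```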

View File

@ -16,7 +16,7 @@ from frigate.comms.recordings_updater import (
RecordingsDataSubscriber,
RecordingsDataTypeEnum,
)
from frigate.config import CameraConfig, DetectConfig, ModelConfig
from frigate.config import CameraConfig, DetectConfig, LoggerConfig, ModelConfig
from frigate.config.camera.camera import CameraTypeEnum
from frigate.config.camera.updater import (
CameraConfigUpdateEnum,
@ -539,6 +539,7 @@ class CameraCapture(FrigateProcess):
shm_frame_count: int,
camera_metrics: CameraMetrics,
stop_event: MpEvent,
log_config: LoggerConfig | None = None,
) -> None:
super().__init__(
stop_event,
@ -549,9 +550,10 @@ class CameraCapture(FrigateProcess):
self.config = config
self.shm_frame_count = shm_frame_count
self.camera_metrics = camera_metrics
self.log_config = log_config
def run(self) -> None:
self.pre_run_setup()
self.pre_run_setup(self.log_config)
camera_watchdog = CameraWatchdog(
self.config,
self.shm_frame_count,
@ -577,6 +579,7 @@ class CameraTracker(FrigateProcess):
ptz_metrics: PTZMetrics,
region_grid: list[list[dict[str, Any]]],
stop_event: MpEvent,
log_config: LoggerConfig | None = None,
) -> None:
super().__init__(
stop_event,
@ -592,9 +595,10 @@ class CameraTracker(FrigateProcess):
self.camera_metrics = camera_metrics
self.ptz_metrics = ptz_metrics
self.region_grid = region_grid
self.log_config = log_config
def run(self) -> None:
self.pre_run_setup()
self.pre_run_setup(self.log_config)
frame_queue = self.camera_metrics.frame_queue
frame_shape = self.config.frame_shape

web/.gitignore vendored
View File

@ -22,3 +22,4 @@ dist-ssr
*.njsproj
*.sln
*.sw?
.env

View File

@ -72,7 +72,10 @@
"formattedTimestampFilename": {
"12hour": "MM-dd-yy-h-mm-ss-a",
"24hour": "MM-dd-yy-HH-mm-ss"
}
},
"inProgress": "In progress",
"invalidStartTime": "Invalid start time",
"invalidEndTime": "Invalid end time"
},
"unit": {
"speed": {
@ -96,7 +99,9 @@
"back": "Go back",
"hide": "Hide {{item}}",
"show": "Show {{item}}",
"ID": "ID"
"ID": "ID",
"none": "None",
"all": "All"
},
"list": {
"two": "{{0}} and {{1}}",
@ -142,7 +147,8 @@
"unselect": "Unselect",
"export": "Export",
"deleteNow": "Delete Now",
"next": "Next"
"next": "Next",
"continue": "Continue"
},
"menu": {
"system": "System",
@ -235,6 +241,7 @@
"export": "Export",
"uiPlayground": "UI Playground",
"faceLibrary": "Face Library",
"classification": "Classification",
"user": {
"title": "User",
"account": "Account",

View File

@ -67,9 +67,6 @@
},
"activity_context_prompt": {
"label": "Custom activity context prompt defining normal activity patterns for this property."
},
"camera_context": {
"label": "Spatial context about the camera's field of view to help with descriptive accuracy. Should describe physical features and locations outside the frame. This is for spatial reference only and should NOT include subjective assessments."
}
}
}

View File

@ -13,6 +13,12 @@
"deleteModels": "Delete Models",
"editModel": "Edit Model"
},
"tooltip": {
"trainingInProgress": "Model is currently training",
"noNewImages": "No new images to train. Classify more images in the dataset first.",
"noChanges": "No changes to the dataset since last training.",
"modelNotReady": "Model is not ready for training"
},
"toast": {
"success": {
"deletedCategory": "Deleted Class",
@ -22,20 +28,25 @@
"categorizedImage": "Successfully Classified Image",
"trainedModel": "Successfully trained model.",
"trainingModel": "Successfully started model training.",
"updatedModel": "Successfully updated model configuration"
"updatedModel": "Successfully updated model configuration",
"renamedCategory": "Successfully renamed class to {{name}}"
},
"error": {
"deleteImageFailed": "Failed to delete: {{errorMessage}}",
"deleteCategoryFailed": "Failed to delete class: {{errorMessage}}",
"deleteModelFailed": "Failed to delete model: {{errorMessage}}",
"categorizeFailed": "Failed to categorize image: {{errorMessage}}",
"trainingFailed": "Failed to start model training: {{errorMessage}}",
"updateModelFailed": "Failed to update model: {{errorMessage}}"
"trainingFailed": "Model training failed. Check Frigate logs for details.",
"trainingFailedToStart": "Failed to start model training: {{errorMessage}}",
"updateModelFailed": "Failed to update model: {{errorMessage}}",
"renameCategoryFailed": "Failed to rename class: {{errorMessage}}"
}
},
"deleteCategory": {
"title": "Delete Class",
"desc": "Are you sure you want to delete the class {{name}}? This will permanently delete all associated images and require re-training the model."
"desc": "Are you sure you want to delete the class {{name}}? This will permanently delete all associated images and require re-training the model.",
"minClassesTitle": "Cannot Delete Class",
"minClassesDesc": "A classification model must have at least 2 classes. Add another class before deleting this one."
},
"deleteModel": {
"title": "Delete Classification Model",
@ -141,6 +152,8 @@
"step3": {
"selectImagesPrompt": "Select all images with: {{className}}",
"selectImagesDescription": "Click on images to select them. Click Continue when you're done with this class.",
"allImagesRequired_one": "Please classify all images. {{count}} image remaining.",
"allImagesRequired_other": "Please classify all images. {{count}} images remaining.",
"generating": {
"title": "Generating Sample Images",
"description": "Frigate is pulling representative images from your recordings. This may take a moment..."

View File

@ -24,8 +24,8 @@
"label": "Detail",
"noDataFound": "No detail data to review",
"aria": "Toggle detail view",
"trackedObject_one": "object",
"trackedObject_other": "objects",
"trackedObject_one": "{{count}} object",
"trackedObject_other": "{{count}} objects",
"noObjectDetailData": "No object detail data available.",
"settings": "Detail View Settings",
"alwaysExpandActive": {

View File

@ -35,7 +35,7 @@
"snapshot": "snapshot",
"thumbnail": "thumbnail",
"video": "video",
"object_lifecycle": "object lifecycle"
"tracking_details": "tracking details"
},
"trackingDetails": {
"title": "Tracking Details",

View File

@ -75,7 +75,7 @@
"deletedName_other": "{{count}} faces have been successfully deleted.",
"renamedFace": "Successfully renamed face to {{name}}",
"trainedFace": "Successfully trained face.",
"updatedFaceScore": "Successfully updated face score."
"updatedFaceScore": "Successfully updated face score to {{name}} ({{score}})."
},
"error": {
"uploadingImageFailed": "Failed to upload image: {{errorMessage}}",

View File

@ -8,7 +8,7 @@
"masksAndZones": "Mask and Zone Editor - Frigate",
"motionTuner": "Motion Tuner - Frigate",
"object": "Debug - Frigate",
"general": "General Settings - Frigate",
"general": "UI Settings - Frigate",
"frigatePlus": "Frigate+ Settings - Frigate",
"notifications": "Notification Settings - Frigate"
},
@ -37,7 +37,7 @@
"noCamera": "No Camera"
},
"general": {
"title": "General Settings",
"title": "UI Settings",
"liveDashboard": {
"title": "Live Dashboard",
"automaticLiveView": {
@ -51,6 +51,10 @@
"displayCameraNames": {
"label": "Always Show Camera Names",
"desc": "Always show the camera names in a chip in the multi-camera live view dashboard."
},
"liveFallbackTimeout": {
"label": "Live Player Fallback Timeout",
"desc": "When a camera's high quality live stream is unavailable, fall back to low bandwidth mode after this many seconds. Default: 3."
}
},
"storedLayouts": {
@ -154,6 +158,7 @@
"description": "Follow the steps below to add a new camera to your Frigate installation.",
"steps": {
"nameAndConnection": "Name & Connection",
"probeOrSnapshot": "Probe or Snapshot",
"streamConfiguration": "Stream Configuration",
"validationAndTesting": "Validation & Testing"
},
@ -172,7 +177,7 @@
"testFailed": "Stream test failed: {{error}}"
},
"step1": {
"description": "Enter your camera details and test the connection.",
"description": "Enter your camera details and choose to probe the camera or manually select the brand.",
"cameraName": "Camera Name",
"cameraNamePlaceholder": "e.g., front_door or Back Yard Overview",
"host": "Host/IP Address",
@ -188,33 +193,65 @@
"brandInformation": "Brand information",
"brandUrlFormat": "For cameras with the RTSP URL format as: {{exampleUrl}}",
"customUrlPlaceholder": "rtsp://username:password@host:port/path",
"testConnection": "Test Connection",
"testSuccess": "Connection test successful!",
"testFailed": "Connection test failed. Please check your input and try again.",
"streamDetails": "Stream Details",
"testing": {
"probingMetadata": "Probing camera metadata...",
"fetchingSnapshot": "Fetching camera snapshot..."
},
"warnings": {
"noSnapshot": "Unable to fetch a snapshot from the configured stream."
},
"connectionSettings": "Connection Settings",
"detectionMethod": "Stream Detection Method",
"onvifPort": "ONVIF Port",
"probeMode": "Probe camera",
"manualMode": "Manual selection",
"detectionMethodDescription": "Probe the camera with ONVIF (if supported) to find camera stream URLs, or manually select the camera brand to use pre-defined URLs. To enter a custom RTSP URL, choose the manual method and select \"Other\".",
"onvifPortDescription": "For cameras that support ONVIF, this is usually 80 or 8080.",
"useDigestAuth": "Use digest authentication",
"useDigestAuthDescription": "Use HTTP digest authentication for ONVIF. Some cameras may require a dedicated ONVIF username/password instead of the standard admin user.",
"errors": {
"brandOrCustomUrlRequired": "Either select a camera brand with host/IP or choose 'Other' with a custom URL",
"nameRequired": "Camera name is required",
"nameLength": "Camera name must be 64 characters or less",
"invalidCharacters": "Camera name contains invalid characters",
"nameExists": "Camera name already exists",
"customUrlRtspRequired": "Custom URLs must begin with \"rtsp://\". Manual configuration is required for non-RTSP camera streams.",
"brands": {
"reolink-rtsp": "Reolink RTSP is not recommended. Enable HTTP in the camera's firmware settings and restart the wizard."
}
},
"docs": {
"reolink": "https://docs.frigate.video/configuration/camera_specific.html#reolink-cameras"
"customUrlRtspRequired": "Custom URLs must begin with \"rtsp://\". Manual configuration is required for non-RTSP camera streams."
}
},
"step2": {
"description": "Probe the camera for available streams or configure manual settings based on your selected detection method.",
"testSuccess": "Connection test successful!",
"testFailed": "Connection test failed. Please check your input and try again.",
"testFailedTitle": "Test Failed",
"streamDetails": "Stream Details",
"probing": "Probing camera...",
"retry": "Retry",
"testing": {
"probingMetadata": "Probing camera metadata...",
"fetchingSnapshot": "Fetching camera snapshot..."
},
"probeFailed": "Failed to probe camera: {{error}}",
"probingDevice": "Probing device...",
"probeSuccessful": "Probe successful",
"probeError": "Probe Error",
"probeNoSuccess": "Probe unsuccessful",
"deviceInfo": "Device Information",
"manufacturer": "Manufacturer",
"model": "Model",
"firmware": "Firmware",
"profiles": "Profiles",
"ptzSupport": "PTZ Support",
"autotrackingSupport": "Autotracking Support",
"presets": "Presets",
"rtspCandidates": "RTSP Candidates",
"rtspCandidatesDescription": "The following RTSP URLs were found from the camera probe. Test the connection to view stream metadata.",
"noRtspCandidates": "No RTSP URLs were found from the camera. Your credentials may be incorrect, or the camera may not support ONVIF or the method used to retrieve RTSP URLs. Go back and enter the RTSP URL manually.",
"candidateStreamTitle": "Candidate {{number}}",
"useCandidate": "Use",
"uriCopy": "Copy",
"uriCopied": "URI copied to clipboard",
"testConnection": "Test Connection",
"toggleUriView": "Click to toggle full URI view",
"connected": "Connected",
"notConnected": "Not Connected",
"errors": {
"hostRequired": "Host/IP address is required"
}
},
"step3": {
"description": "Configure stream roles and add additional streams for your camera.",
"streamsTitle": "Camera Streams",
"addStream": "Add Stream",
@ -222,6 +259,9 @@
"streamTitle": "Stream {{number}}",
"streamUrl": "Stream URL",
"streamUrlPlaceholder": "rtsp://username:password@host:port/path",
"selectStream": "Select a stream",
"searchCandidates": "Search candidates...",
"noStreamFound": "No stream found",
"url": "URL",
"resolution": "Resolution",
"selectResolution": "Select resolution",
@ -253,7 +293,7 @@
"description": "Use go2rtc restreaming to reduce connections to your camera."
}
},
"step3": {
"step4": {
"description": "Final validation and analysis before saving your new camera. Connect each stream before saving.",
"validationTitle": "Stream Validation",
"connectAllStreams": "Connect All Streams",
@ -289,6 +329,9 @@
"audioCodecRecordError": "The AAC audio codec is required to support audio in recordings.",
"audioCodecRequired": "An audio stream is required to support audio detection.",
"restreamingWarning": "Reducing connections to the camera for the record stream may increase CPU usage slightly.",
"brands": {
"reolink-rtsp": "Reolink RTSP is not recommended. Enable HTTP in the camera's firmware settings and restart the wizard."
},
"dahua": {
"substreamWarning": "Substream 1 is locked to a low resolution. Many Dahua / Amcrest / EmpireTech cameras support additional substreams that need to be enabled in the camera's settings. It is recommended to check and utilize those streams if available."
},

View File

@ -44,11 +44,16 @@ self.addEventListener("notificationclick", (event) => {
switch (event.action ?? "default") {
case "markReviewed":
if (event.notification.data) {
fetch("/api/reviews/viewed", {
method: "POST",
headers: { "Content-Type": "application/json", "X-CSRF-TOKEN": 1 },
body: JSON.stringify({ ids: [event.notification.data.id] }),
});
event.waitUntil(
fetch("/api/reviews/viewed", {
method: "POST",
headers: {
"Content-Type": "application/json",
"X-CSRF-TOKEN": 1,
},
body: JSON.stringify({ ids: [event.notification.data.id] }),
}), // eslint-disable-line comma-dangle
);
}
break;
default:
@ -58,7 +63,7 @@ self.addEventListener("notificationclick", (event) => {
// eslint-disable-next-line no-undef
if (clients.openWindow) {
// eslint-disable-next-line no-undef
return clients.openWindow(url);
event.waitUntil(clients.openWindow(url));
}
}
}

View File
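Wrapping the `fetch` in `event.waitUntil` matters because the browser may terminate a service worker as soon as the event handler returns; `waitUntil` extends the worker's lifetime until the registered promise settles. A sketch with a stubbed event (assumption: the real handler receives an `ExtendableEvent`):

```typescript
// Stub of ExtendableEvent.waitUntil: the browser keeps the worker alive
// until every promise passed to waitUntil settles.
class FakeExtendableEvent {
  pending: Promise<unknown>[] = [];
  waitUntil(p: Promise<unknown>): void {
    this.pending.push(p);
  }
}

let posted = false;

// Stand-in for the fetch("/api/reviews/viewed", ...) POST in the handler.
function markReviewed(): Promise<void> {
  posted = true; // the request is issued immediately...
  return Promise.resolve(); // ...and settles later
}

// Without waitUntil, a fire-and-forget fetch can be cut off when the
// worker is torn down; registering the promise prevents that.
function handleNotificationClick(event: FakeExtendableEvent): void {
  event.waitUntil(markReviewed());
}
```

The same reasoning applies to the `clients.openWindow(url)` change further down: returning the promise from the handler does nothing, while `event.waitUntil` actually keeps the worker alive for it.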

@ -2,12 +2,19 @@ import * as React from "react";
import * as LabelPrimitive from "@radix-ui/react-label";
import { useCameraFriendlyName } from "@/hooks/use-camera-friendly-name";
import { CameraConfig } from "@/types/frigateConfig";
import { useZoneFriendlyName } from "@/hooks/use-zone-friendly-name";
interface CameraNameLabelProps
extends React.ComponentPropsWithoutRef<typeof LabelPrimitive.Root> {
camera?: string | CameraConfig;
}
interface ZoneNameLabelProps
extends React.ComponentPropsWithoutRef<typeof LabelPrimitive.Root> {
zone: string;
camera?: string;
}
const CameraNameLabel = React.forwardRef<
React.ElementRef<typeof LabelPrimitive.Root>,
CameraNameLabelProps
@ -21,4 +28,17 @@ const CameraNameLabel = React.forwardRef<
});
CameraNameLabel.displayName = LabelPrimitive.Root.displayName;
export { CameraNameLabel };
const ZoneNameLabel = React.forwardRef<
React.ElementRef<typeof LabelPrimitive.Root>,
ZoneNameLabelProps
>(({ className, zone, camera, ...props }, ref) => {
const displayName = useZoneFriendlyName(zone, camera);
return (
<LabelPrimitive.Root ref={ref} className={className} {...props}>
{displayName}
</LabelPrimitive.Root>
);
});
ZoneNameLabel.displayName = LabelPrimitive.Root.displayName;
export { CameraNameLabel, ZoneNameLabel };

View File

@ -148,13 +148,13 @@ export const ClassificationCard = forwardRef<
<div
className={cn(
"flex flex-col items-start text-white",
data.score ? "text-xs" : "text-sm",
data.score != undefined ? "text-xs" : "text-sm",
)}
>
<div className="smart-capitalize">
{data.name == "unknown" ? t("details.unknown") : data.name}
</div>
{data.score && (
{data.score != undefined && (
<div
className={cn(
"",
@ -398,11 +398,7 @@ export function GroupedClassificationCard({
threshold={threshold}
selected={false}
i18nLibrary={i18nLibrary}
onClick={(data, meta) => {
if (meta || selectedItems.length > 0) {
onClick(data);
}
}}
onClick={() => {}}
>
{children?.(data)}
</ClassificationCard>

View File
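Switching from `data.score ? ... : ...` to `data.score != undefined` fixes a falsy-zero bug: a score of `0` is a real value but falsy in JavaScript, and the loose `!=` comparison also excludes `null`. The difference, as hypothetical standalone predicates:

```typescript
// A confidence score of 0 is a legitimate value, but 0 is falsy,
// so a bare truthiness test silently hides it.
function hasScoreTruthy(score?: number | null): boolean {
  return Boolean(score); // drops 0 along with undefined and null
}

// Loose != undefined matches both undefined and null but keeps 0.
function hasScore(score?: number | null): boolean {
  return score != undefined;
}
```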

@ -14,7 +14,6 @@ type SearchThumbnailProps = {
findSimilar: () => void;
refreshResults: () => void;
showTrackingDetails: () => void;
showSnapshot: () => void;
addTrigger: () => void;
};
@ -24,7 +23,6 @@ export default function SearchThumbnailFooter({
findSimilar,
refreshResults,
showTrackingDetails,
showSnapshot,
addTrigger,
}: SearchThumbnailProps) {
const { t } = useTranslation(["views/search"]);
@ -62,7 +60,6 @@ export default function SearchThumbnailFooter({
findSimilar={findSimilar}
refreshResults={refreshResults}
showTrackingDetails={showTrackingDetails}
showSnapshot={showSnapshot}
addTrigger={addTrigger}
/>
</div>

View File

@ -28,6 +28,7 @@ import {
CustomClassificationModelConfig,
FrigateConfig,
} from "@/types/frigateConfig";
import { ClassificationDatasetResponse } from "@/types/classification";
import { getTranslatedLabel } from "@/utils/i18n";
import { zodResolver } from "@hookform/resolvers/zod";
import axios from "axios";
@ -140,16 +141,19 @@ export default function ClassificationModelEditDialog({
});
// Fetch dataset to get current classes for state models
const { data: dataset } = useSWR<{
[id: string]: string[];
}>(isStateModel ? `classification/${model.name}/dataset` : null, {
revalidateOnFocus: false,
});
const { data: dataset } = useSWR<ClassificationDatasetResponse>(
isStateModel ? `classification/${model.name}/dataset` : null,
{
revalidateOnFocus: false,
},
);
// Update form with classes from dataset when loaded
useEffect(() => {
if (isStateModel && dataset) {
const classes = Object.keys(dataset).filter((key) => key !== "none");
if (isStateModel && dataset?.categories) {
const classes = Object.keys(dataset.categories).filter(
(key) => key !== "none",
);
if (classes.length > 0) {
(form as ReturnType<typeof useForm<StateFormData>>).setValue(
"classes",

View File
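The dataset response is now typed with classes nested under `categories`, and the reserved `"none"` bucket is dropped when deriving the class list. A sketch over an assumed response shape (the real `ClassificationDatasetResponse` may carry more fields):

```typescript
// Assumed shape: class name -> image filenames.
type DatasetResponse = {
  categories: Record<string, string[]>;
};

// Derive the user-visible classes, excluding the reserved "none" bucket.
function extractClasses(dataset?: DatasetResponse): string[] {
  if (!dataset?.categories) return [];
  return Object.keys(dataset.categories).filter((key) => key !== "none");
}
```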

@ -15,6 +15,7 @@ import Step3ChooseExamples, {
} from "./wizard/Step3ChooseExamples";
import { cn } from "@/lib/utils";
import { isDesktop } from "react-device-detect";
import axios from "axios";
const OBJECT_STEPS = [
"wizard.steps.nameAndDefine",
@ -120,7 +121,18 @@ export default function ClassificationModelWizardDialog({
dispatch({ type: "PREVIOUS_STEP" });
};
const handleCancel = () => {
const handleCancel = async () => {
// Clean up any generated training images if we're cancelling from Step 3
if (wizardState.step1Data && wizardState.step3Data?.examplesGenerated) {
try {
await axios.delete(
`/classification/${wizardState.step1Data.modelName}`,
);
} catch (error) {
// Silently fail - user is already cancelling
}
}
dispatch({ type: "RESET" });
onClose();
};

View File

@ -10,6 +10,12 @@ import useSWR from "swr";
import { baseUrl } from "@/api/baseUrl";
import { isMobile } from "react-device-detect";
import { cn } from "@/lib/utils";
import {
Tooltip,
TooltipContent,
TooltipTrigger,
} from "@/components/ui/tooltip";
import { TooltipPortal } from "@radix-ui/react-tooltip";
export type Step3FormData = {
examplesGenerated: boolean;
@ -159,18 +165,15 @@ export default function Step3ChooseExamples({
const isLastClass = currentClassIndex === allClasses.length - 1;
if (isLastClass) {
// Assign remaining unclassified images
unknownImages.slice(0, 24).forEach((imageName) => {
if (!newClassifications[imageName]) {
// For state models with 2 classes, assign to the last class
// For object models, assign to "none"
if (step1Data.modelType === "state" && allClasses.length === 2) {
newClassifications[imageName] = allClasses[allClasses.length - 1];
} else {
// For object models, assign remaining unclassified images to "none"
// For state models, this should never happen since we require all images to be classified
if (step1Data.modelType !== "state") {
unknownImages.slice(0, 24).forEach((imageName) => {
if (!newClassifications[imageName]) {
newClassifications[imageName] = "none";
}
}
});
});
}
// All done, trigger training immediately
setImageClassifications(newClassifications);
@ -310,13 +313,44 @@ export default function Step3ChooseExamples({
return images;
}
return images.filter((img) => !imageClassifications[img]);
}, [unknownImages, imageClassifications]);
// If we're viewing a previous class (going back), show images for that class
// Otherwise show only unclassified images
const currentClassInView = allClasses[currentClassIndex];
return images.filter((img) => {
const imgClass = imageClassifications[img];
// Show if: unclassified OR classified with current class we're viewing
return !imgClass || imgClass === currentClassInView;
});
}, [unknownImages, imageClassifications, allClasses, currentClassIndex]);
const allImagesClassified = useMemo(() => {
return unclassifiedImages.length === 0;
}, [unclassifiedImages]);
// For state models on the last class, require all images to be classified
const isLastClass = currentClassIndex === allClasses.length - 1;
const canProceed = useMemo(() => {
if (step1Data.modelType === "state" && isLastClass) {
// Check if all 24 images will be classified after current selections are applied
const totalImages = unknownImages.slice(0, 24).length;
// Count images that will be classified (either already classified or currently selected)
const allImages = unknownImages.slice(0, 24);
const willBeClassified = allImages.filter((img) => {
return imageClassifications[img] || selectedImages.has(img);
}).length;
return willBeClassified >= totalImages;
}
return true;
}, [
step1Data.modelType,
isLastClass,
unknownImages,
imageClassifications,
selectedImages,
]);
const handleBack = useCallback(() => {
if (currentClassIndex > 0) {
const previousClass = allClasses[currentClassIndex - 1];
@ -438,20 +472,35 @@ export default function Step3ChooseExamples({
<Button type="button" onClick={handleBack} className="sm:flex-1">
{t("button.back", { ns: "common" })}
</Button>
<Button
type="button"
onClick={
allImagesClassified
? handleContinue
: handleContinueClassification
}
variant="select"
className="flex items-center justify-center gap-2 sm:flex-1"
disabled={!hasGenerated || isGenerating || isProcessing}
>
{isProcessing && <ActivityIndicator className="size-4" />}
{t("button.continue", { ns: "common" })}
</Button>
<Tooltip>
<TooltipTrigger asChild>
<Button
type="button"
onClick={
allImagesClassified
? handleContinue
: handleContinueClassification
}
variant="select"
className="flex items-center justify-center gap-2 sm:flex-1"
disabled={
!hasGenerated || isGenerating || isProcessing || !canProceed
}
>
{isProcessing && <ActivityIndicator className="size-4" />}
{t("button.continue", { ns: "common" })}
</Button>
</TooltipTrigger>
{!canProceed && (
<TooltipPortal>
<TooltipContent>
{t("wizard.step3.allImagesRequired", {
count: unclassifiedImages.length,
})}
</TooltipContent>
</TooltipPortal>
)}
</Tooltip>
</div>
)}
</div>

View File
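The `canProceed` gate above disables Continue for a state model's last class until every sampled image is either already classified or currently selected. The counting logic reduces to a pure function (hypothetical standalone form):

```typescript
// For a state model's last class, every image must be classified
// already or selected right now; otherwise Continue stays disabled.
function canProceed(
  images: string[],
  classifications: Record<string, string>,
  selected: Set<string>,
): boolean {
  const willBeClassified = images.filter(
    (img) => classifications[img] || selected.has(img),
  ).length;
  return willBeClassified >= images.length;
}
```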

@ -76,7 +76,7 @@ import { CameraStreamingDialog } from "../settings/CameraStreamingDialog";
import { DialogTrigger } from "@radix-ui/react-dialog";
import { useStreamingSettings } from "@/context/streaming-settings-provider";
import { Trans, useTranslation } from "react-i18next";
import { CameraNameLabel } from "../camera/CameraNameLabel";
import { CameraNameLabel } from "../camera/FriendlyNameLabel";
import { useAllowedCameras } from "@/hooks/use-allowed-cameras";
import { useIsCustomRole } from "@/hooks/use-is-custom-role";

View File

@ -190,7 +190,7 @@ export function CamerasFilterContent({
key={item}
isChecked={currentCameras?.includes(item) ?? false}
label={item}
isCameraName={true}
type={"camera"}
disabled={
mainCamera !== undefined &&
currentCameras !== undefined &&

View File

@ -1,29 +1,39 @@
import { Switch } from "../ui/switch";
import { Label } from "../ui/label";
import { CameraNameLabel } from "../camera/CameraNameLabel";
import { CameraNameLabel, ZoneNameLabel } from "../camera/FriendlyNameLabel";
type FilterSwitchProps = {
label: string;
disabled?: boolean;
isChecked: boolean;
isCameraName?: boolean;
type?: string;
extraValue?: string;
onCheckedChange: (checked: boolean) => void;
};
export default function FilterSwitch({
label,
disabled = false,
isChecked,
isCameraName = false,
type = "",
extraValue = "",
onCheckedChange,
}: FilterSwitchProps) {
return (
<div className="flex items-center justify-between gap-1">
{isCameraName ? (
{type === "camera" ? (
<CameraNameLabel
className={`mx-2 w-full cursor-pointer text-sm font-medium leading-none text-primary smart-capitalize peer-disabled:cursor-not-allowed peer-disabled:opacity-70 ${disabled ? "text-secondary-foreground" : ""}`}
htmlFor={label}
camera={label}
/>
) : type === "zone" ? (
<ZoneNameLabel
className={`mx-2 w-full cursor-pointer text-sm font-medium leading-none text-primary smart-capitalize peer-disabled:cursor-not-allowed peer-disabled:opacity-70 ${disabled ? "text-secondary-foreground" : ""}`}
htmlFor={label}
camera={extraValue}
zone={label}
/>
) : (
<Label
className={`mx-2 w-full cursor-pointer text-primary smart-capitalize ${disabled ? "text-secondary-foreground" : ""}`}

View File

@ -454,6 +454,24 @@ export function GeneralFilterContent({
onClose,
}: GeneralFilterContentProps) {
const { t } = useTranslation(["components/filter", "views/events"]);
const { data: config } = useSWR<FrigateConfig>("config", {
revalidateOnFocus: false,
});
const allAudioListenLabels = useMemo<string[]>(() => {
if (!config) {
return [];
}
const labels = new Set<string>();
Object.values(config.cameras).forEach((camera) => {
if (camera?.audio?.enabled) {
camera.audio.listen.forEach((label) => {
labels.add(label);
});
}
});
return [...labels].sort();
}, [config]);
return (
<>
<div className="scrollbar-container h-auto max-h-[80dvh] overflow-y-auto overflow-x-hidden">
@ -504,7 +522,10 @@ export function GeneralFilterContent({
{allLabels.map((item) => (
<FilterSwitch
key={item}
label={getTranslatedLabel(item)}
label={getTranslatedLabel(
item,
allAudioListenLabels.includes(item) ? "audio" : "object",
)}
isChecked={filter.labels?.includes(item) ?? false}
onCheckedChange={(isChecked) => {
if (isChecked) {
@ -550,7 +571,8 @@ export function GeneralFilterContent({
{allZones.map((item) => (
<FilterSwitch
key={item}
label={item.replaceAll("_", " ")}
label={item}
type={"zone"}
isChecked={filter.zones?.includes(item) ?? false}
onCheckedChange={(isChecked) => {
if (isChecked) {

View File
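The filter now gathers every `audio.listen` label from cameras with audio detection enabled, so those labels can be translated under an `"audio"` namespace rather than `"object"`. A sketch over a trimmed-down config shape (assumption; the real `FrigateConfig` is much larger):

```typescript
// Trimmed-down config shape for illustration.
type CameraAudio = { enabled: boolean; listen: string[] };
type Config = { cameras: Record<string, { audio?: CameraAudio }> };

// Collect the distinct audio labels from cameras with audio enabled.
function allAudioListenLabels(config?: Config): string[] {
  if (!config) return [];
  const labels = new Set<string>();
  for (const camera of Object.values(config.cameras)) {
    if (camera?.audio?.enabled) {
      camera.audio.listen.forEach((label) => labels.add(label));
    }
  }
  return [...labels].sort();
}
```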

@ -53,7 +53,7 @@ import { FrigateConfig } from "@/types/frigateConfig";
import { MdImageSearch } from "react-icons/md";
import { useTranslation } from "react-i18next";
import { getTranslatedLabel } from "@/utils/i18n";
import { CameraNameLabel } from "../camera/CameraNameLabel";
import { CameraNameLabel, ZoneNameLabel } from "../camera/FriendlyNameLabel";
type InputWithTagsProps = {
inputFocused: boolean;
@ -81,6 +81,43 @@ export default function InputWithTags({
revalidateOnFocus: false,
});
const allAudioListenLabels = useMemo<Set<string>>(() => {
if (!config) {
return new Set<string>();
}
const labels = new Set<string>();
Object.values(config.cameras).forEach((camera) => {
if (camera?.audio?.enabled) {
camera.audio.listen.forEach((label) => {
labels.add(label);
});
}
});
return labels;
}, [config]);
const translatedAudioLabelMap = useMemo<Map<string, string>>(() => {
const map = new Map<string, string>();
if (!config) return map;
allAudioListenLabels.forEach((label) => {
// getTranslatedLabel likely depends on i18n internally; including `lang`
// in deps ensures this map is rebuilt when language changes
map.set(label, getTranslatedLabel(label, "audio"));
});
return map;
}, [allAudioListenLabels, config]);
function resolveLabel(value: string) {
const mapped = translatedAudioLabelMap.get(value);
if (mapped) return mapped;
return getTranslatedLabel(
value,
allAudioListenLabels.has(value) ? "audio" : "object",
);
}
const [inputValue, setInputValue] = useState(search || "");
const [currentFilterType, setCurrentFilterType] = useState<FilterType | null>(
null,
@ -421,7 +458,8 @@ export default function InputWithTags({
? t("button.yes", { ns: "common" })
: t("button.no", { ns: "common" });
} else if (filterType === "labels") {
return getTranslatedLabel(String(filterValues));
const value = String(filterValues);
return resolveLabel(value);
} else if (filterType === "search_type") {
return t("filter.searchType." + String(filterValues));
} else {
@ -828,9 +866,11 @@ export default function InputWithTags({
>
{t("filter.label." + filterType)}:{" "}
{filterType === "labels" ? (
getTranslatedLabel(value)
resolveLabel(value)
) : filterType === "cameras" ? (
<CameraNameLabel camera={value} />
) : filterType === "zones" ? (
<ZoneNameLabel zone={value} />
) : (
value.replaceAll("_", " ")
)}
@ -934,6 +974,11 @@ export default function InputWithTags({
<CameraNameLabel camera={suggestion} />
{")"}
</>
) : currentFilterType === "zones" ? (
<>
{suggestion} {" ("} <ZoneNameLabel zone={suggestion} />
{")"}
</>
) : (
suggestion
)
@ -943,6 +988,8 @@ export default function InputWithTags({
{currentFilterType ? (
currentFilterType === "cameras" ? (
<CameraNameLabel camera={suggestion} />
) : currentFilterType === "zones" ? (
<ZoneNameLabel zone={suggestion} />
) : (
formatFilterValues(currentFilterType, suggestion)
)

View File
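`resolveLabel` prefers a pre-built map of audio-label translations (a `useMemo` in the component) and falls back to a namespace-aware lookup. The pattern, with a stub translator standing in for `getTranslatedLabel`:

```typescript
// Stub translator standing in for getTranslatedLabel(label, namespace).
function translate(label: string, ns: "audio" | "object"): string {
  return `${ns}:${label}`;
}

const audioLabels = new Set(["bark", "speech"]);

// Pre-translated cache for known audio labels.
const audioLabelMap = new Map<string, string>(
  [...audioLabels].map((label) => [label, translate(label, "audio")]),
);

// Prefer the cached translation; otherwise pick the namespace by label.
function resolveLabel(value: string): string {
  return (
    audioLabelMap.get(value) ??
    translate(value, audioLabels.has(value) ? "audio" : "object")
  );
}
```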

@ -47,7 +47,7 @@ import {
import { useTranslation } from "react-i18next";
import { useDateLocale } from "@/hooks/use-date-locale";
import { useIsAdmin } from "@/hooks/use-is-admin";
import { CameraNameLabel } from "../camera/CameraNameLabel";
import { CameraNameLabel } from "../camera/FriendlyNameLabel";
type LiveContextMenuProps = {
className?: string;

View File

@ -4,12 +4,7 @@ import { FrigateConfig } from "@/types/frigateConfig";
import { baseUrl } from "@/api/baseUrl";
import { toast } from "sonner";
import axios from "axios";
import { LuCamera, LuDownload, LuTrash2 } from "react-icons/lu";
import { FiMoreVertical } from "react-icons/fi";
import { FaArrowsRotate } from "react-icons/fa6";
import { MdImageSearch } from "react-icons/md";
import FrigatePlusIcon from "@/components/icons/FrigatePlusIcon";
import { isMobileOnly } from "react-device-detect";
import { buttonVariants } from "@/components/ui/button";
import {
ContextMenu,
@ -33,15 +28,8 @@ import {
AlertDialogHeader,
AlertDialogTitle,
} from "@/components/ui/alert-dialog";
import {
Tooltip,
TooltipContent,
TooltipTrigger,
} from "@/components/ui/tooltip";
import useSWR from "swr";
import { Trans, useTranslation } from "react-i18next";
import { BsFillLightningFill } from "react-icons/bs";
import BlurredIconButton from "../button/BlurredIconButton";
type SearchResultActionsProps = {
@ -49,7 +37,6 @@ type SearchResultActionsProps = {
findSimilar: () => void;
refreshResults: () => void;
showTrackingDetails: () => void;
showSnapshot: () => void;
addTrigger: () => void;
isContextMenu?: boolean;
children?: ReactNode;
@ -60,7 +47,6 @@ export default function SearchResultActions({
findSimilar,
refreshResults,
showTrackingDetails,
showSnapshot,
addTrigger,
isContextMenu = false,
children,
@ -107,7 +93,6 @@ export default function SearchResultActions({
href={`${baseUrl}api/events/${searchResult.id}/clip.mp4`}
download={`${searchResult.camera}_${searchResult.label}.mp4`}
>
<LuDownload className="mr-2 size-4" />
<span>{t("itemMenu.downloadVideo.label")}</span>
</a>
</MenuItem>
@ -119,7 +104,6 @@ export default function SearchResultActions({
href={`${baseUrl}api/events/${searchResult.id}/snapshot.jpg`}
download={`${searchResult.camera}_${searchResult.label}.jpg`}
>
<LuCamera className="mr-2 size-4" />
<span>{t("itemMenu.downloadSnapshot.label")}</span>
</a>
</MenuItem>
@ -129,48 +113,31 @@ export default function SearchResultActions({
aria-label={t("itemMenu.viewTrackingDetails.aria")}
onClick={showTrackingDetails}
>
<FaArrowsRotate className="mr-2 size-4" />
<span>{t("itemMenu.viewTrackingDetails.label")}</span>
</MenuItem>
)}
{config?.semantic_search?.enabled && isContextMenu && (
<MenuItem
aria-label={t("itemMenu.findSimilar.aria")}
onClick={findSimilar}
>
<MdImageSearch className="mr-2 size-4" />
<span>{t("itemMenu.findSimilar.label")}</span>
</MenuItem>
)}
{config?.semantic_search?.enabled &&
searchResult.data.type == "object" && (
<MenuItem
aria-label={t("itemMenu.findSimilar.aria")}
onClick={findSimilar}
>
<span>{t("itemMenu.findSimilar.label")}</span>
</MenuItem>
)}
{config?.semantic_search?.enabled &&
searchResult.data.type == "object" && (
<MenuItem
aria-label={t("itemMenu.addTrigger.aria")}
onClick={addTrigger}
>
<BsFillLightningFill className="mr-2 size-4" />
<span>{t("itemMenu.addTrigger.label")}</span>
</MenuItem>
)}
{isMobileOnly &&
config?.plus?.enabled &&
searchResult.has_snapshot &&
searchResult.end_time &&
searchResult.data.type == "object" &&
!searchResult.plus_id && (
<MenuItem
aria-label={t("itemMenu.submitToPlus.aria")}
onClick={showSnapshot}
>
<FrigatePlusIcon className="mr-2 size-4 cursor-pointer text-primary" />
<span>{t("itemMenu.submitToPlus.label")}</span>
</MenuItem>
)}
<MenuItem
aria-label={t("itemMenu.deleteTrackedObject.label")}
onClick={() => setDeleteDialogOpen(true)}
>
<LuTrash2 className="mr-2 size-4" />
<span>{t("button.delete", { ns: "common" })}</span>
</MenuItem>
</>
@ -211,44 +178,6 @@ export default function SearchResultActions({
</ContextMenu>
) : (
<>
{config?.semantic_search?.enabled &&
searchResult.data.type == "object" && (
<Tooltip>
<TooltipTrigger asChild>
<BlurredIconButton
onClick={findSimilar}
aria-label={t("itemMenu.findSimilar.aria")}
>
<MdImageSearch className="size-5" />
</BlurredIconButton>
</TooltipTrigger>
<TooltipContent>
{t("itemMenu.findSimilar.label")}
</TooltipContent>
</Tooltip>
)}
{!isMobileOnly &&
config?.plus?.enabled &&
searchResult.has_snapshot &&
searchResult.end_time &&
searchResult.data.type == "object" &&
!searchResult.plus_id && (
<Tooltip>
<TooltipTrigger asChild>
<BlurredIconButton
onClick={showSnapshot}
aria-label={t("itemMenu.submitToPlus.aria")}
>
<FrigatePlusIcon className="size-5" />
</BlurredIconButton>
</TooltipTrigger>
<TooltipContent>
{t("itemMenu.submitToPlus.label")}
</TooltipContent>
</Tooltip>
)}
<DropdownMenu>
<DropdownMenuTrigger asChild>
<BlurredIconButton aria-label={t("itemMenu.more.aria")}>

View File

@ -4,6 +4,7 @@ import {
useEffect,
useState,
useCallback,
useRef,
} from "react";
import { createPortal } from "react-dom";
import { motion, AnimatePresence } from "framer-motion";
@ -121,17 +122,20 @@ export function MobilePagePortal({
type MobilePageContentProps = {
children: React.ReactNode;
className?: string;
scrollerRef?: React.RefObject<HTMLDivElement>;
};
export function MobilePageContent({
children,
className,
scrollerRef,
}: MobilePageContentProps) {
const context = useContext(MobilePageContext);
if (!context)
throw new Error("MobilePageContent must be used within MobilePage");
const [isVisible, setIsVisible] = useState(context.open);
const containerRef = useRef<HTMLDivElement | null>(null);
useEffect(() => {
if (context.open) {
@ -140,15 +144,27 @@ export function MobilePageContent({
}, [context.open]);
const handleAnimationComplete = () => {
if (!context.open) {
if (context.open) {
// After opening animation completes, ensure scroller is at the top
if (scrollerRef?.current) {
scrollerRef.current.scrollTop = 0;
}
} else {
setIsVisible(false);
}
};
useEffect(() => {
if (context.open && scrollerRef?.current) {
scrollerRef.current.scrollTop = 0;
}
}, [context.open, scrollerRef]);
return (
<AnimatePresence>
{isVisible && (
<motion.div
ref={containerRef}
className={cn(
"fixed inset-0 z-50 mb-12 bg-background",
isPWA && "mb-16",

View File

@ -46,13 +46,13 @@ export default function NavItem({
onClick={onClick}
className={({ isActive }) =>
cn(
"flex flex-col items-center justify-center rounded-lg",
"flex flex-col items-center justify-center rounded-lg p-[6px]",
className,
variants[item.variant ?? "primary"][isActive ? "active" : "inactive"],
)
}
>
<Icon className="size-5 md:m-[6px]" />
<Icon className="size-5" />
</NavLink>
);

View File

@ -97,14 +97,12 @@ export default function ClassificationSelectionDialog({
return (
<div className={className ?? "flex"}>
{newClass && (
<TextEntryDialog
open={true}
setOpen={setNewClass}
title={t("createCategory.new")}
onSave={(newCat) => onCategorizeImage(newCat)}
/>
)}
<TextEntryDialog
open={newClass}
setOpen={setNewClass}
title={t("createCategory.new")}
onSave={(newCat) => onCategorizeImage(newCat)}
/>
<Tooltip>
<Selector>

View File

@ -25,7 +25,7 @@ import {
} from "@/components/ui/dialog";
import { useTranslation } from "react-i18next";
import { FrigateConfig } from "@/types/frigateConfig";
import { CameraNameLabel } from "../camera/CameraNameLabel";
import { CameraNameLabel } from "../camera/FriendlyNameLabel";
import { isDesktop, isMobile } from "react-device-detect";
import { cn } from "@/lib/utils";
import {

View File

@ -159,7 +159,7 @@ export default function CreateTriggerDialog({
});
const onSubmit = async (values: z.infer<typeof formSchema>) => {
if (trigger) {
if (trigger && existingTriggerNames.includes(trigger.name)) {
onEdit({ ...values });
} else {
onCreate(

View File
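The submit fix only takes the edit path when the dialog's `trigger` prop names a trigger that still exists; otherwise it falls through to create. As a hypothetical standalone predicate:

```typescript
type Trigger = { name: string };

// Edit only when the dialog was opened with a trigger that still exists
// in the current config; anything else goes through the create path.
function isEdit(
  trigger: Trigger | undefined,
  existingNames: string[],
): boolean {
  return trigger != undefined && existingNames.includes(trigger.name);
}
```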

@ -13,7 +13,8 @@ import { zodResolver } from "@hookform/resolvers/zod";
import { useForm } from "react-hook-form";
import { z } from "zod";
import ActivityIndicator from "../indicators/activity-indicator";
import { useEffect, useState } from "react";
import { useEffect, useState, useMemo } from "react";
import useSWR from "swr";
import {
Dialog,
DialogContent,
@ -35,6 +36,7 @@ import { LuCheck, LuX } from "react-icons/lu";
import { useTranslation } from "react-i18next";
import { isDesktop, isMobile } from "react-device-detect";
import { cn } from "@/lib/utils";
import { FrigateConfig } from "@/types/frigateConfig";
import {
MobilePage,
MobilePageContent,
@ -54,9 +56,15 @@ export default function CreateUserDialog({
onCreate,
onCancel,
}: CreateUserOverlayProps) {
const { data: config } = useSWR<FrigateConfig>("config");
const { t } = useTranslation(["views/settings"]);
const [isLoading, setIsLoading] = useState<boolean>(false);
const roles = useMemo(() => {
const existingRoles = config ? Object.keys(config.auth?.roles || {}) : [];
return Array.from(new Set(["admin", "viewer", ...(existingRoles || [])]));
}, [config]);
const formSchema = z
.object({
user: z
@ -69,7 +77,7 @@ export default function CreateUserDialog({
confirmPassword: z
.string()
.min(1, t("users.dialog.createUser.confirmPassword")),
role: z.enum(["admin", "viewer"]),
role: z.string().min(1),
})
.refine((data) => data.password === data.confirmPassword, {
message: t("users.dialog.form.password.notMatch"),
@ -246,24 +254,22 @@ export default function CreateUserDialog({
</SelectTrigger>
</FormControl>
<SelectContent>
<SelectItem
value="admin"
className="flex items-center gap-2"
>
<div className="flex items-center gap-2">
<Shield className="h-4 w-4 text-primary" />
<span>{t("role.admin", { ns: "common" })}</span>
</div>
</SelectItem>
<SelectItem
value="viewer"
className="flex items-center gap-2"
>
<div className="flex items-center gap-2">
<User className="h-4 w-4 text-muted-foreground" />
<span>{t("role.viewer", { ns: "common" })}</span>
</div>
</SelectItem>
{roles.map((r) => (
<SelectItem
value={r}
key={r}
className="flex items-center gap-2"
>
<div className="flex items-center gap-2">
{r === "admin" ? (
<Shield className="h-4 w-4 text-primary" />
) : (
<User className="h-4 w-4 text-muted-foreground" />
)}
<span>{t(`role.${r}`, { ns: "common" }) || r}</span>
</div>
</SelectItem>
))}
</SelectContent>
</Select>
<FormDescription className="text-xs text-muted-foreground">


@@ -24,7 +24,7 @@ import {
} from "@/components/ui/dialog";
import { Trans, useTranslation } from "react-i18next";
import { FrigateConfig } from "@/types/frigateConfig";
import { CameraNameLabel } from "@/components/camera/CameraNameLabel";
import { CameraNameLabel } from "@/components/camera/FriendlyNameLabel";
type EditRoleCamerasOverlayProps = {
show: boolean;


@@ -12,6 +12,7 @@ import {
DropdownMenuContent,
DropdownMenuItem,
DropdownMenuLabel,
DropdownMenuSeparator,
DropdownMenuTrigger,
} from "@/components/ui/dropdown-menu";
import {
@@ -20,7 +21,6 @@ import {
TooltipTrigger,
} from "@/components/ui/tooltip";
import { isDesktop, isMobile } from "react-device-detect";
import { LuPlus, LuScanFace } from "react-icons/lu";
import { useTranslation } from "react-i18next";
import { cn } from "@/lib/utils";
import React, { ReactNode, useMemo, useState } from "react";
@@ -89,27 +89,26 @@ export default function FaceSelectionDialog({
<DropdownMenuLabel>{t("trainFaceAs")}</DropdownMenuLabel>
<div
className={cn(
"flex max-h-[40dvh] flex-col overflow-y-auto",
"flex max-h-[40dvh] flex-col overflow-y-auto overflow-x-hidden",
isMobile && "gap-2 pb-4",
)}
>
<SelectorItem
className="flex cursor-pointer gap-2 smart-capitalize"
onClick={() => setNewFace(true)}
>
<LuPlus />
{t("createFaceLibrary.new")}
</SelectorItem>
{faceNames.sort().map((faceName) => (
<SelectorItem
key={faceName}
className="flex cursor-pointer gap-2 smart-capitalize"
onClick={() => onTrainAttempt(faceName)}
>
<LuScanFace />
{faceName}
</SelectorItem>
))}
<DropdownMenuSeparator />
<SelectorItem
className="flex cursor-pointer gap-2 smart-capitalize"
onClick={() => setNewFace(true)}
>
{t("createFaceLibrary.new")}
</SelectorItem>
</div>
</SelectorContent>
</Selector>


@@ -171,6 +171,18 @@ export default function ImagePicker({
alt={selectedImage?.label || "Selected image"}
className="size-16 rounded object-cover"
onLoad={() => handleImageLoad(selectedImageId || "")}
onError={(e) => {
// If trigger thumbnail fails to load, fall back to event thumbnail
if (!selectedImage) {
const target = e.target as HTMLImageElement;
if (
target.src.includes("clips/triggers") &&
selectedImageId
) {
target.src = `${apiHost}api/events/${selectedImageId}/thumbnail.webp`;
}
}
}}
loading="lazy"
/>
{selectedImageId && !loadedImages.has(selectedImageId) && (
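The `onError` handler above swaps a failed trigger thumbnail for the event's thumbnail endpoint. The URL decision can be expressed as a pure helper (`fallbackThumbnailUrl` is a hypothetical name; the component mutates `img.src` directly rather than calling such a function):

```typescript
// If the failing src points at a trigger thumbnail (under clips/triggers)
// and we know the event id, fall back to the event thumbnail endpoint;
// otherwise keep the original URL.
function fallbackThumbnailUrl(
  src: string,
  apiHost: string,
  eventId: string | undefined,
): string {
  if (src.includes("clips/triggers") && eventId) {
    return `${apiHost}api/events/${eventId}/thumbnail.webp`;
  }
  return src;
}
```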


@@ -12,13 +12,13 @@ export function ImageShadowOverlay({
<>
<div
className={cn(
"pointer-events-none absolute inset-x-0 top-0 z-10 h-[30%] w-full rounded-lg bg-gradient-to-b from-black/20 to-transparent md:rounded-2xl",
"pointer-events-none absolute inset-x-0 top-0 z-10 h-[30%] w-full rounded-lg bg-gradient-to-b from-black/20 to-transparent",
upperClassName,
)}
/>
<div
className={cn(
"pointer-events-none absolute inset-x-0 bottom-0 z-10 h-[10%] w-full rounded-lg bg-gradient-to-t from-black/20 to-transparent md:rounded-2xl",
"pointer-events-none absolute inset-x-0 bottom-0 z-10 h-[10%] w-full rounded-lg bg-gradient-to-t from-black/20 to-transparent",
lowerClassName,
)}
/>


@@ -4,7 +4,7 @@ import { Button } from "../ui/button";
import { FaVideo } from "react-icons/fa";
import { isMobile } from "react-device-detect";
import { useTranslation } from "react-i18next";
import { CameraNameLabel } from "../camera/CameraNameLabel";
import { CameraNameLabel } from "../camera/FriendlyNameLabel";
type MobileCameraDrawerProps = {
allCameras: string[];


@@ -12,6 +12,7 @@ import { TooltipPortal } from "@radix-ui/react-tooltip";
import { cn } from "@/lib/utils";
import { useTranslation } from "react-i18next";
import { Event } from "@/types/event";
import { resolveZoneName } from "@/hooks/use-zone-friendly-name";
// Use a small tolerance (10ms) for browsers with seek precision by-design issues
const TOLERANCE = 0.01;
@@ -114,6 +115,10 @@ export default function ObjectTrackOverlay({
{ revalidateOnFocus: false },
);
const getZonesFriendlyNames = (zones: string[], config: FrigateConfig) => {
return zones?.map((zone) => resolveZoneName(config, zone)) ?? [];
};
const timelineResults = useMemo(() => {
// Group timeline entries by source_id
if (!timelineData) return selectedObjectIds.map(() => []);
@@ -127,8 +132,19 @@
}
// Return timeline arrays in the same order as selectedObjectIds
return selectedObjectIds.map((id) => grouped[id] || []);
}, [selectedObjectIds, timelineData]);
return selectedObjectIds.map((id) => {
const entries = grouped[id] || [];
return entries.map((event) => ({
...event,
data: {
...event.data,
zones_friendly_names: config
? getZonesFriendlyNames(event.data?.zones, config)
: [],
},
}));
});
}, [selectedObjectIds, timelineData, config]);
const typeColorMap = useMemo(
() => ({


@@ -141,50 +141,52 @@ export function AnnotationSettingsPane({
<Form {...form}>
<form
onSubmit={form.handleSubmit(onSubmit)}
className="flex flex-1 flex-col space-y-6"
className="flex flex-1 flex-col space-y-3"
>
<FormField
control={form.control}
name="annotationOffset"
render={({ field }) => (
<FormItem className="flex flex-row items-start justify-between space-x-2">
<div className="flex flex-col gap-1">
<FormLabel>
{t("trackingDetails.annotationSettings.offset.label")}
</FormLabel>
<FormDescription>
<Trans ns="views/explore">
trackingDetails.annotationSettings.offset.millisecondsToOffset
</Trans>
<FormMessage />
<div className="mt-2">
{t("trackingDetails.annotationSettings.offset.tips")}
<div className="mt-2 flex items-center text-primary">
<Link
to={getLocaleDocUrl("configuration/reference")}
target="_blank"
rel="noopener noreferrer"
className="inline"
>
{t("readTheDocumentation", { ns: "common" })}
<LuExternalLink className="ml-2 inline-flex size-3" />
</Link>
</div>
<>
<FormItem className="flex flex-row items-start justify-between space-x-2">
<div className="flex flex-col gap-1">
<FormLabel>
{t("trackingDetails.annotationSettings.offset.label")}
</FormLabel>
<FormDescription>
<Trans ns="views/explore">
trackingDetails.annotationSettings.offset.millisecondsToOffset
</Trans>
<FormMessage />
</FormDescription>
</div>
<div className="flex flex-col gap-3">
<div className="min-w-24">
<FormControl>
<Input
className="text-md w-full border border-input bg-background p-2 text-center hover:bg-accent hover:text-accent-foreground dark:[color-scheme:dark]"
placeholder="0"
{...field}
/>
</FormControl>
</div>
</FormDescription>
</div>
<div className="flex flex-col gap-3">
<div className="min-w-24">
<FormControl>
<Input
className="text-md w-full border border-input bg-background p-2 text-center hover:bg-accent hover:text-accent-foreground dark:[color-scheme:dark]"
placeholder="0"
{...field}
/>
</FormControl>
</div>
</FormItem>
<div className="mt-1 text-sm text-secondary-foreground">
{t("trackingDetails.annotationSettings.offset.tips")}
<div className="mt-2 flex items-center text-primary-variant">
<Link
to={getLocaleDocUrl("configuration/reference")}
target="_blank"
rel="noopener noreferrer"
className="inline"
>
{t("readTheDocumentation", { ns: "common" })}
<LuExternalLink className="ml-2 inline-flex size-3" />
</Link>
</div>
</div>
</FormItem>
</>
)}
/>


@@ -55,29 +55,32 @@ export default function DetailActionsMenu({
</DropdownMenuTrigger>
<DropdownMenuPortal>
<DropdownMenuContent align="end">
<DropdownMenuItem>
<a
className="w-full"
href={`${baseUrl}api/events/${search.id}/snapshot.jpg?bbox=1`}
download={`${search.camera}_${search.label}.jpg`}
>
<div className="flex cursor-pointer items-center gap-2">
<span>{t("itemMenu.downloadSnapshot.label")}</span>
</div>
</a>
</DropdownMenuItem>
<DropdownMenuItem>
<a
className="w-full"
href={`${baseUrl}api/${search.camera}/${clipTimeRange}/clip.mp4`}
download
>
<div className="flex cursor-pointer items-center gap-2">
<span>{t("itemMenu.downloadVideo.label")}</span>
</div>
</a>
</DropdownMenuItem>
{search.has_snapshot && (
<DropdownMenuItem>
<a
className="w-full"
href={`${baseUrl}api/events/${search.id}/snapshot.jpg?bbox=1`}
download={`${search.camera}_${search.label}.jpg`}
>
<div className="flex cursor-pointer items-center gap-2">
<span>{t("itemMenu.downloadSnapshot.label")}</span>
</div>
</a>
</DropdownMenuItem>
)}
{search.has_clip && (
<DropdownMenuItem>
<a
className="w-full"
href={`${baseUrl}api/${search.camera}/${clipTimeRange}/clip.mp4`}
download
>
<div className="flex cursor-pointer items-center gap-2">
<span>{t("itemMenu.downloadVideo.label")}</span>
</div>
</a>
</DropdownMenuItem>
)}
{config?.semantic_search.enabled &&
setSimilarity != undefined &&
@@ -111,6 +114,23 @@ export default function DetailActionsMenu({
</div>
</DropdownMenuItem>
)}
{config?.semantic_search.enabled && search.data.type == "object" && (
<DropdownMenuItem
onClick={() => {
setIsOpen(false);
setTimeout(() => {
navigate(
`/settings?page=triggers&camera=${search.camera}&event_id=${search.id}`,
);
}, 0);
}}
>
<div className="flex cursor-pointer items-center gap-2">
<span>{t("itemMenu.addTrigger.label")}</span>
</div>
</DropdownMenuItem>
)}
</DropdownMenuContent>
</DropdownMenuPortal>
</DropdownMenu>


@@ -8,6 +8,9 @@ import {
import { TooltipPortal } from "@radix-ui/react-tooltip";
import { getLifecycleItemDescription } from "@/utils/lifecycleUtil";
import { useTranslation } from "react-i18next";
import { resolveZoneName } from "@/hooks/use-zone-friendly-name";
import { FrigateConfig } from "@/types/frigateConfig";
import useSWR from "swr";
type ObjectPathProps = {
positions?: Position[];
@@ -42,16 +45,31 @@ export function ObjectPath({
visible = true,
}: ObjectPathProps) {
const { t } = useTranslation(["views/explore"]);
const { data: config } = useSWR<FrigateConfig>("config");
const getAbsolutePositions = useCallback(() => {
if (!imgRef.current || !positions) return [];
const imgRect = imgRef.current.getBoundingClientRect();
return positions.map((pos) => ({
x: pos.x * imgRect.width,
y: pos.y * imgRect.height,
timestamp: pos.timestamp,
lifecycle_item: pos.lifecycle_item,
}));
}, [positions, imgRef]);
return positions.map((pos) => {
return {
x: pos.x * imgRect.width,
y: pos.y * imgRect.height,
timestamp: pos.timestamp,
lifecycle_item: pos.lifecycle_item?.data?.zones
? {
...pos.lifecycle_item,
data: {
...pos.lifecycle_item?.data,
zones_friendly_names: pos.lifecycle_item?.data.zones.map(
(zone) => {
return resolveZoneName(config, zone);
},
),
},
}
: pos.lifecycle_item,
};
});
}, [imgRef, positions, config]);
const generateStraightPath = useCallback((points: Position[]) => {
if (!points || points.length < 2) return "";


@@ -34,9 +34,11 @@ import ActivityIndicator from "@/components/indicators/activity-indicator";
import {
FaArrowRight,
FaCheckCircle,
FaChevronDown,
FaChevronLeft,
FaChevronRight,
FaMicrophone,
FaCheck,
FaTimes,
} from "react-icons/fa";
import { TrackingDetails } from "./TrackingDetails";
import { AnnotationSettingsPane } from "./AnnotationSettingsPane";
@@ -72,7 +74,12 @@ import {
PopoverContent,
PopoverTrigger,
} from "@/components/ui/popover";
import { Drawer, DrawerContent, DrawerTrigger } from "@/components/ui/drawer";
import {
Drawer,
DrawerContent,
DrawerTitle,
DrawerTrigger,
} from "@/components/ui/drawer";
import { LuInfo } from "react-icons/lu";
import { TooltipPortal } from "@radix-ui/react-tooltip";
import { FaPencilAlt } from "react-icons/fa";
@@ -80,10 +87,11 @@ import TextEntryDialog from "@/components/overlay/dialog/TextEntryDialog";
import { Trans, useTranslation } from "react-i18next";
import { useIsAdmin } from "@/hooks/use-is-admin";
import { getTranslatedLabel } from "@/utils/i18n";
import { CameraNameLabel } from "@/components/camera/CameraNameLabel";
import { CameraNameLabel } from "@/components/camera/FriendlyNameLabel";
import { DialogPortal } from "@radix-ui/react-dialog";
import { useDetailStream } from "@/context/detail-stream-context";
import { PiSlidersHorizontalBold } from "react-icons/pi";
import { HiSparkles } from "react-icons/hi";
const SEARCH_TABS = ["snapshot", "tracking_details"] as const;
export type SearchTab = (typeof SEARCH_TABS)[number];
@@ -126,7 +134,7 @@ function TabsWithActions({
return (
<div className="flex items-center justify-between gap-1">
<ScrollArea className="flex-1 whitespace-nowrap">
<div className="mb-2 flex flex-row md:mb-0">
<div className="mb-2 flex flex-row">
<ToggleGroup
className="*:rounded-md *:px-3 *:py-4"
type="single"
@@ -224,6 +232,7 @@ function AnnotationSettings({
const Overlay = isDesktop ? Popover : Drawer;
const Trigger = isDesktop ? PopoverTrigger : DrawerTrigger;
const Content = isDesktop ? PopoverContent : DrawerContent;
const Title = isDesktop ? "div" : DrawerTitle;
const contentProps = isDesktop
? { align: "end" as const, container: container ?? undefined }
: {};
@@ -248,7 +257,9 @@ function AnnotationSettings({
<PiSlidersHorizontalBold className="size-5" />
</Button>
</Trigger>
<Title className="sr-only">
{t("trackingDetails.adjustAnnotationSettings")}
</Title>
<Content
className={
isDesktop
@@ -306,7 +317,7 @@ function DialogContentComponent({
if (page === "tracking_details") {
return (
<TrackingDetails
className={cn("size-full", !isDesktop && "flex flex-col gap-4")}
className={cn(isDesktop ? "size-full" : "flex flex-col gap-4")}
event={search as unknown as Event}
tabs={
isDesktop ? (
@@ -340,7 +351,12 @@ function DialogContentComponent({
}
/>
) : (
<div className={cn(!isDesktop ? "mb-4 w-full" : "size-full")}>
<div
className={cn(
"max-w-lg",
!isDesktop ? "mb-4 w-full" : "mx-auto size-full",
)}
>
<img
className="w-full select-none rounded-lg object-contain transition-opacity"
style={
@@ -359,16 +375,11 @@ function DialogContentComponent({
if (isDesktop) {
return (
<div className="flex h-full gap-4 overflow-hidden">
<div
className={cn(
"scrollbar-container flex-[3] overflow-y-hidden",
!search.has_snapshot && "flex-[2]",
)}
>
<div className="grid h-full w-full grid-cols-[60%_40%] gap-4">
<div className="scrollbar-container min-w-0 overflow-y-auto overflow-x-hidden">
{snapshotElement}
</div>
<div className="flex flex-col gap-4 overflow-hidden md:basis-2/5">
<div className="flex min-w-0 flex-col gap-4 pr-2">
<TabsWithActions
search={search}
searchTabs={searchTabs}
@ -381,7 +392,7 @@ function DialogContentComponent({
setIsPopoverOpen={setIsPopoverOpen}
dialogContainer={dialogContainer}
/>
<div className="scrollbar-container flex-1 overflow-y-auto">
<div className="scrollbar-container min-w-0 flex-1 overflow-y-auto overflow-x-hidden px-4">
<ObjectDetailsTab
search={search}
config={config}
@@ -584,8 +595,13 @@ export default function SearchDetailDialog({
"scrollbar-container overflow-y-auto",
isDesktop &&
"max-h-[95dvh] sm:max-w-xl md:max-w-4xl lg:max-w-[70%]",
isMobile && "px-4",
isMobile && "flex h-full flex-col px-4",
)}
onEscapeKeyDown={(event) => {
if (isPopoverOpen) {
event.preventDefault();
}
}}
onInteractOutside={(e) => {
if (isPopoverOpen) {
e.preventDefault();
@ -596,7 +612,7 @@ export default function SearchDetailDialog({
}
}}
>
<Header>
<Header className={cn(!isDesktop && "top-0 z-[60] mb-0")}>
<Title>{t("trackedObjectDetails")}</Title>
<Description className="sr-only">
{t("trackedObjectDetails")}
@@ -667,6 +683,22 @@ function ObjectDetailsTab({
const mutate = useGlobalMutation();
// Helper to map over SWR cached search results while preserving
// either paginated format (SearchResult[][]) or flat format (SearchResult[])
const mapSearchResults = useCallback(
(
currentData: SearchResult[][] | SearchResult[] | undefined,
fn: (event: SearchResult) => SearchResult,
) => {
if (!currentData) return currentData;
if (Array.isArray(currentData[0])) {
return (currentData as SearchResult[][]).map((page) => page.map(fn));
}
return (currentData as SearchResult[]).map(fn);
},
[],
);
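The `mapSearchResults` helper above exists because SWR's infinite cache stores results as pages (`SearchResult[][]`) while plain fetches store a flat `SearchResult[]`; flattening the paginated form during an optimistic update changes the cache shape and triggers repeated refetches (the bug this PR fixes per the commit message). A minimal standalone version of the same shape-preserving map, with a simplified `SearchResult` type for illustration:

```typescript
// Simplified stand-in for the app's SearchResult type.
type SearchResult = { id: string; data: Record<string, unknown> };

// Apply fn to every result while preserving whichever cache shape we were
// given: paginated (array of pages) or flat (single array).
function mapSearchResults(
  currentData: SearchResult[][] | SearchResult[] | undefined,
  fn: (event: SearchResult) => SearchResult,
): SearchResult[][] | SearchResult[] | undefined {
  if (!currentData) return currentData;
  if (Array.isArray(currentData[0])) {
    // Paginated: map each page, keeping the SearchResult[][] structure.
    return (currentData as SearchResult[][]).map((page) => page.map(fn));
  }
  return (currentData as SearchResult[]).map(fn);
}
```

The earlier code called `.flat().map(...)` on the cached data, which collapsed `SearchResult[][]` into `SearchResult[]` and left the cache in a shape SWR's infinite mode did not expect.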
// users
const isAdmin = useIsAdmin();
@@ -676,6 +708,8 @@ function ObjectDetailsTab({
const [desc, setDesc] = useState(search?.data.description);
const [isSubLabelDialogOpen, setIsSubLabelDialogOpen] = useState(false);
const [isLPRDialogOpen, setIsLPRDialogOpen] = useState(false);
const [isEditingDesc, setIsEditingDesc] = useState(false);
const originalDescRef = useRef<string | null>(null);
const handleDescriptionFocus = useCallback(() => {
setInputFocused(true);
@@ -792,17 +826,12 @@ function ObjectDetailsTab({
(key.includes("events") ||
key.includes("events/search") ||
key.includes("events/explore")),
(currentData: SearchResult[][] | SearchResult[] | undefined) => {
if (!currentData) return currentData;
// optimistic update
return currentData
.flat()
.map((event) =>
event.id === search.id
? { ...event, data: { ...event.data, description: desc } }
: event,
);
},
(currentData: SearchResult[][] | SearchResult[] | undefined) =>
mapSearchResults(currentData, (event) =>
event.id === search.id
? { ...event, data: { ...event.data, description: desc } }
: event,
),
{
optimisticData: true,
rollbackOnError: true,
@ -825,7 +854,7 @@ function ObjectDetailsTab({
);
setDesc(search.data.description);
});
}, [desc, search, mutate, t]);
}, [desc, search, mutate, t, mapSearchResults]);
const regenerateDescription = useCallback(
(source: "snapshot" | "thumbnails") => {
@@ -897,9 +926,8 @@ function ObjectDetailsTab({
(key.includes("events") ||
key.includes("events/search") ||
key.includes("events/explore")),
(currentData: SearchResult[][] | SearchResult[] | undefined) => {
if (!currentData) return currentData;
return currentData.flat().map((event) =>
(currentData: SearchResult[][] | SearchResult[] | undefined) =>
mapSearchResults(currentData, (event) =>
event.id === search.id
? {
...event,
@ -910,8 +938,7 @@ function ObjectDetailsTab({
},
}
: event,
);
},
),
{
optimisticData: true,
rollbackOnError: true,
@@ -945,7 +972,7 @@ function ObjectDetailsTab({
);
});
},
[search, apiHost, mutate, setSearch, t],
[search, apiHost, mutate, setSearch, t, mapSearchResults],
);
// recognized plate
@@ -974,9 +1001,8 @@ function ObjectDetailsTab({
(key.includes("events") ||
key.includes("events/search") ||
key.includes("events/explore")),
(currentData: SearchResult[][] | SearchResult[] | undefined) => {
if (!currentData) return currentData;
return currentData.flat().map((event) =>
(currentData: SearchResult[][] | SearchResult[] | undefined) =>
mapSearchResults(currentData, (event) =>
event.id === search.id
? {
...event,
@ -987,8 +1013,7 @@ function ObjectDetailsTab({
},
}
: event,
);
},
),
{
optimisticData: true,
rollbackOnError: true,
@@ -1022,7 +1047,7 @@ function ObjectDetailsTab({
);
});
},
[search, apiHost, mutate, setSearch, t],
[search, apiHost, mutate, setSearch, t, mapSearchResults],
);
// speech transcription
@@ -1078,15 +1103,46 @@ function ObjectDetailsTab({
});
setState("submitted");
setSearch({
...search,
plus_id: "new_upload",
});
mutate(
(key) =>
typeof key === "string" &&
(key.includes("events") ||
key.includes("events/search") ||
key.includes("events/explore")),
(currentData: SearchResult[][] | SearchResult[] | undefined) =>
mapSearchResults(currentData, (event) =>
event.id === search.id
? { ...event, plus_id: "new_upload" }
: event,
),
{
optimisticData: true,
rollbackOnError: true,
revalidate: false,
},
);
},
[search, setSearch],
[search, mutate, mapSearchResults],
);
const popoverContainerRef = useRef<HTMLDivElement | null>(null);
const canRegenerate = !!(
config?.cameras[search.camera].objects.genai.enabled && search.end_time
);
const showGenAIPlaceholder = !!(
config?.cameras[search.camera].objects.genai.enabled &&
!search.end_time &&
(config.cameras[search.camera].objects.genai.required_zones.length === 0 ||
search.zones.some((zone) =>
config.cameras[search.camera].objects.genai.required_zones.includes(
zone,
),
)) &&
(config.cameras[search.camera].objects.genai.objects.length === 0 ||
config.cameras[search.camera].objects.genai.objects.includes(
search.label,
))
);
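The `showGenAIPlaceholder` flag above shows the pending-description placeholder only while the object is still in progress and matches the camera's GenAI zone/object filters, where an empty filter list means "no restriction". The same condition as a pure predicate (hypothetical simplified types for illustration):

```typescript
// Simplified stand-ins for the relevant slices of FrigateConfig and the
// tracked object.
type GenAIConfig = {
  enabled: boolean;
  required_zones: string[];
  objects: string[];
};
type TrackedObject = { label: string; zones: string[]; end_time?: number };

function shouldShowGenAIPlaceholder(
  genai: GenAIConfig,
  obj: TrackedObject,
): boolean {
  return (
    genai.enabled &&
    !obj.end_time && // object still in progress
    (genai.required_zones.length === 0 ||
      obj.zones.some((z) => genai.required_zones.includes(z))) &&
    (genai.objects.length === 0 || genai.objects.includes(obj.label))
  );
}
```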
return (
<div ref={popoverContainerRef} className="flex flex-col gap-5">
<div className="flex w-full flex-row">
@@ -1101,7 +1157,7 @@ function ObjectDetailsTab({
</div>
<div className="flex flex-row items-center gap-2 text-sm smart-capitalize">
{getIconForLabel(search.label, "size-4 text-primary")}
{getTranslatedLabel(search.label)}
{getTranslatedLabel(search.label, search.data.type)}
{search.sub_label && ` (${search.sub_label})`}
{isAdmin && search.end_time && (
<Tooltip>
@@ -1242,176 +1298,175 @@ function ObjectDetailsTab({
</div>
</div>
<div
className={cn(
"my-2 flex w-full flex-col justify-between gap-1.5",
state == "submitted" && "flex-row",
)}
>
<div className="text-sm text-primary/40">
<div className="flex flex-row items-center gap-1">
{t("explore.plus.submitToPlus.label", {
ns: "components/dialog",
})}
<Popover>
<PopoverTrigger asChild>
<div className="cursor-pointer p-0">
<LuInfo className="size-4" />
<span className="sr-only">Info</span>
</div>
</PopoverTrigger>
<PopoverContent
container={popoverContainerRef.current}
className="w-80 text-xs"
>
{t("explore.plus.submitToPlus.desc", {
{search.data.type === "object" &&
config?.plus?.enabled &&
search.has_snapshot && (
<div
className={cn(
"my-2 flex w-full flex-col justify-between gap-1.5",
state == "submitted" && "flex-row",
)}
>
<div className="text-sm text-primary/40">
<div className="flex flex-row items-center gap-1">
{t("explore.plus.submitToPlus.label", {
ns: "components/dialog",
})}
</PopoverContent>
</Popover>
</div>
</div>
<Popover>
<PopoverTrigger asChild>
<div className="cursor-pointer p-0">
<LuInfo className="size-4" />
<span className="sr-only">Info</span>
</div>
</PopoverTrigger>
<PopoverContent
container={popoverContainerRef.current}
className="w-80 text-xs"
>
{t("explore.plus.submitToPlus.desc", {
ns: "components/dialog",
})}
</PopoverContent>
</Popover>
</div>
</div>
<div className="flex flex-row items-center justify-between gap-2 text-sm">
{state == "reviewing" && (
<>
<div>
{i18n.language === "en" ? (
// English with a/an logic plus label
<>
{/^[aeiou]/i.test(search?.label || "") ? (
<Trans
ns="components/dialog"
values={{ label: search?.label }}
>
explore.plus.review.question.ask_an
</Trans>
<div className="flex flex-row items-center justify-between gap-2 text-sm">
{state == "reviewing" && (
<>
<div>
{i18n.language === "en" ? (
// English with a/an logic plus label
<>
{/^[aeiou]/i.test(search?.label || "") ? (
<Trans
ns="components/dialog"
values={{ label: search?.label }}
>
explore.plus.review.question.ask_an
</Trans>
) : (
<Trans
ns="components/dialog"
values={{ label: search?.label }}
>
explore.plus.review.question.ask_a
</Trans>
)}
</>
) : (
// For other languages
<Trans
ns="components/dialog"
values={{ label: search?.label }}
values={{
untranslatedLabel: search?.label,
translatedLabel: getTranslatedLabel(search?.label),
}}
>
explore.plus.review.question.ask_a
explore.plus.review.question.ask_full
</Trans>
)}
</>
) : (
// For other languages
<Trans
ns="components/dialog"
values={{
untranslatedLabel: search?.label,
translatedLabel: getTranslatedLabel(search?.label),
}}
>
explore.plus.review.question.ask_full
</Trans>
)}
</div>
<div className="flex max-w-xl flex-row gap-2">
<Button
className="flex-1 bg-success"
aria-label={t("button.yes", { ns: "common" })}
onClick={() => {
setState("uploading");
onSubmitToPlus(false);
}}
>
{t("button.yes", { ns: "common" })}
</Button>
<Button
className="flex-1 text-white"
aria-label={t("button.no", { ns: "common" })}
variant="destructive"
onClick={() => {
setState("uploading");
onSubmitToPlus(true);
}}
>
{t("button.no", { ns: "common" })}
</Button>
</div>
</>
)}
{state == "uploading" && <ActivityIndicator />}
{state == "submitted" && (
<div className="flex flex-row items-center justify-center gap-2">
<FaCheckCircle className="size-4 text-success" />
{t("explore.plus.review.state.submitted")}
</div>
)}
</div>
</div>
<div className="flex flex-col gap-1.5">
{config?.cameras[search.camera].objects.genai.enabled &&
!search.end_time &&
(config.cameras[search.camera].objects.genai.required_zones.length ===
0 ||
search.zones.some((zone) =>
config.cameras[search.camera].objects.genai.required_zones.includes(
zone,
),
)) &&
(config.cameras[search.camera].objects.genai.objects.length === 0 ||
config.cameras[search.camera].objects.genai.objects.includes(
search.label,
)) ? (
<>
<div className="text-sm text-primary/40">
{t("details.description.label")}
</div>
<div className="flex h-64 flex-col items-center justify-center gap-3 border p-4 text-sm text-primary/40">
<div className="flex">
<ActivityIndicator />
</div>
<div className="flex">{t("details.description.aiTips")}</div>
</div>
</>
) : (
<>
<div className="text-sm text-primary/40"></div>
<Textarea
className="text-md h-64"
placeholder={t("details.description.placeholder")}
value={desc}
onChange={(e) => setDesc(e.target.value)}
onFocus={handleDescriptionFocus}
onBlur={handleDescriptionBlur}
/>
</>
)}
<div className="flex w-full flex-row justify-end gap-2">
{config?.cameras[search?.camera].audio_transcription.enabled &&
search?.label == "speech" &&
search?.end_time && (
<Button onClick={onTranscribe}>
<div className="flex gap-1">
{t("itemMenu.audioTranscription.label")}
</div>
<div className="flex max-w-xl flex-row gap-2">
<Button
className="flex-1 bg-success"
aria-label={t("button.yes", { ns: "common" })}
onClick={() => {
setState("uploading");
onSubmitToPlus(false);
}}
>
{t("button.yes", { ns: "common" })}
</Button>
<Button
className="flex-1 text-white"
aria-label={t("button.no", { ns: "common" })}
variant="destructive"
onClick={() => {
setState("uploading");
onSubmitToPlus(true);
}}
>
{t("button.no", { ns: "common" })}
</Button>
</div>
</>
)}
{state == "uploading" && <ActivityIndicator />}
{state == "submitted" && (
<div className="flex flex-row items-center justify-center gap-2">
<FaCheckCircle className="size-4 text-success" />
{t("explore.plus.review.state.submitted", {
ns: "components/dialog",
})}
</div>
</Button>
)}
{config?.cameras[search.camera].objects.genai.enabled &&
search.end_time && (
<div className="flex items-start">
<Button
className="rounded-r-none border-r-0"
aria-label={t("details.button.regenerate.label")}
onClick={() => regenerateDescription("thumbnails")}
)}
</div>
</div>
)}
<div className="flex flex-col gap-1.5">
<div className="flex items-center justify-start gap-3">
<div className="text-sm text-primary/40">
{t("details.description.label")}
</div>
<div className="flex items-center gap-3">
<Tooltip>
<TooltipTrigger asChild>
<button
aria-label={t("button.edit", { ns: "common" })}
className="text-primary/40 hover:text-primary/80"
onClick={() => {
originalDescRef.current = desc ?? "";
setIsEditingDesc(true);
}}
>
{t("details.button.regenerate.title")}
</Button>
{search.has_snapshot && (
<DropdownMenu>
<DropdownMenuTrigger asChild>
<Button
className="rounded-l-none border-l-0 px-2"
aria-label={t("details.expandRegenerationMenu")}
>
<FaChevronDown className="size-3" />
</Button>
</DropdownMenuTrigger>
<DropdownMenuContent>
<FaPencilAlt className="size-4" />
</button>
</TooltipTrigger>
<TooltipContent>
{t("button.edit", { ns: "common" })}
</TooltipContent>
</Tooltip>
{config?.cameras[search?.camera].audio_transcription.enabled &&
search?.label == "speech" &&
search?.end_time && (
<Tooltip>
<TooltipTrigger asChild>
<button
aria-label={t("itemMenu.audioTranscription.label")}
className="text-primary/40 hover:text-primary/80"
onClick={onTranscribe}
>
<FaMicrophone className="size-4" />
</button>
</TooltipTrigger>
<TooltipContent>
{t("itemMenu.audioTranscription.label")}
</TooltipContent>
</Tooltip>
)}
{canRegenerate && (
<div className="relative">
<DropdownMenu>
<Tooltip>
<TooltipTrigger asChild>
<DropdownMenuTrigger asChild>
<button
aria-label={t("details.button.regenerate.label")}
className="text-primary/40 hover:text-primary/80"
>
<HiSparkles className="size-4" />
</button>
</DropdownMenuTrigger>
</TooltipTrigger>
<TooltipContent>
{t("details.button.regenerate.title")}
</TooltipContent>
</Tooltip>
<DropdownMenuContent>
{search.has_snapshot && (
<DropdownMenuItem
className="cursor-pointer"
aria-label={t("details.regenerateFromSnapshot")}
@@ -1419,61 +1474,115 @@ function ObjectDetailsTab({
>
{t("details.regenerateFromSnapshot")}
</DropdownMenuItem>
<DropdownMenuItem
className="cursor-pointer"
aria-label={t("details.regenerateFromThumbnails")}
onClick={() => regenerateDescription("thumbnails")}
>
{t("details.regenerateFromThumbnails")}
</DropdownMenuItem>
</DropdownMenuContent>
</DropdownMenu>
)}
)}
<DropdownMenuItem
className="cursor-pointer"
aria-label={t("details.regenerateFromThumbnails")}
onClick={() => regenerateDescription("thumbnails")}
>
{t("details.regenerateFromThumbnails")}
</DropdownMenuItem>
</DropdownMenuContent>
</DropdownMenu>
</div>
)}
{((config?.cameras[search.camera].objects.genai.enabled &&
search.end_time) ||
!config?.cameras[search.camera].objects.genai.enabled) && (
<Button
variant="select"
aria-label={t("button.save", { ns: "common" })}
onClick={updateDescription}
>
{t("button.save", { ns: "common" })}
</Button>
)}
<TextEntryDialog
open={isSubLabelDialogOpen}
setOpen={setIsSubLabelDialogOpen}
title={t("details.editSubLabel.title")}
description={
search.label
? t("details.editSubLabel.desc", {
label: search.label,
})
: t("details.editSubLabel.descNoLabel")
}
onSave={handleSubLabelSave}
defaultValue={search?.sub_label || ""}
allowEmpty={true}
/>
<TextEntryDialog
open={isLPRDialogOpen}
setOpen={setIsLPRDialogOpen}
title={t("details.editLPR.title")}
description={
search.label
? t("details.editLPR.desc", {
label: search.label,
})
: t("details.editLPR.descNoLabel")
}
onSave={handleLPRSave}
defaultValue={search?.data.recognized_license_plate || ""}
allowEmpty={true}
/>
</div>
</div>
{!isEditingDesc ? (
showGenAIPlaceholder ? (
<div className="flex h-32 flex-col items-center justify-center gap-3 border p-4 text-sm text-primary/40">
<div className="flex">
<ActivityIndicator />
</div>
<div className="flex">{t("details.description.aiTips")}</div>
</div>
) : (
<div className="overflow-auto text-sm text-primary">
{desc || t("label.none", { ns: "common" })}
</div>
)
) : (
<div className="flex flex-col gap-2">
<Textarea
className="text-md h-32 md:text-sm"
placeholder={t("details.description.placeholder")}
value={desc}
onChange={(e) => setDesc(e.target.value)}
onFocus={handleDescriptionFocus}
onBlur={handleDescriptionBlur}
autoFocus
/>
<div className="mb-10 flex flex-row justify-end gap-5">
<Tooltip>
<TooltipTrigger asChild>
<button
aria-label={t("button.cancel", { ns: "common" })}
className="text-primary/40 hover:text-primary"
onClick={() => {
setIsEditingDesc(false);
setDesc(originalDescRef.current ?? "");
}}
>
<FaTimes className="size-5" />
</button>
</TooltipTrigger>
<TooltipContent>
{t("button.cancel", { ns: "common" })}
</TooltipContent>
</Tooltip>
<Tooltip>
<TooltipTrigger asChild>
<button
aria-label={t("button.save", { ns: "common" })}
className="text-primary/40 hover:text-primary/80"
onClick={() => {
setIsEditingDesc(false);
updateDescription();
}}
>
<FaCheck className="size-5" />
</button>
</TooltipTrigger>
<TooltipContent>
{t("button.save", { ns: "common" })}
</TooltipContent>
</Tooltip>
</div>
</div>
)}
<TextEntryDialog
open={isSubLabelDialogOpen}
setOpen={setIsSubLabelDialogOpen}
title={t("details.editSubLabel.title")}
description={
search.label
? t("details.editSubLabel.desc", {
label: search.label,
})
: t("details.editSubLabel.descNoLabel")
}
onSave={handleSubLabelSave}
defaultValue={search?.sub_label || ""}
allowEmpty={true}
/>
<TextEntryDialog
open={isLPRDialogOpen}
setOpen={setIsLPRDialogOpen}
title={t("details.editLPR.title")}
description={
search.label
? t("details.editLPR.desc", {
label: search.label,
})
: t("details.editLPR.descNoLabel")
}
onSave={handleLPRSave}
defaultValue={search?.data.recognized_license_plate || ""}
allowEmpty={true}
/>
</div>
</div>
);

View File

@ -1,5 +1,6 @@
import useSWR from "swr";
import { useCallback, useEffect, useMemo, useRef, useState } from "react";
import { useResizeObserver } from "@/hooks/resize-observer";
import { Event } from "@/types/event";
import ActivityIndicator from "@/components/indicators/activity-indicator";
import { TrackingDetailsSequence } from "@/types/timeline";
@ -11,7 +12,11 @@ import { cn } from "@/lib/utils";
import HlsVideoPlayer from "@/components/player/HlsVideoPlayer";
import { baseUrl } from "@/api/baseUrl";
import { REVIEW_PADDING } from "@/types/review";
import { ASPECT_VERTICAL_LAYOUT, ASPECT_WIDE_LAYOUT } from "@/types/record";
import {
ASPECT_VERTICAL_LAYOUT,
ASPECT_WIDE_LAYOUT,
Recording,
} from "@/types/record";
import {
DropdownMenu,
DropdownMenuTrigger,
@ -23,6 +28,7 @@ import { Link, useNavigate } from "react-router-dom";
import { getLifecycleItemDescription } from "@/utils/lifecycleUtil";
import { useTranslation } from "react-i18next";
import { getTranslatedLabel } from "@/utils/i18n";
import { resolveZoneName } from "@/hooks/use-zone-friendly-name";
import { Badge } from "@/components/ui/badge";
import { HiDotsHorizontal } from "react-icons/hi";
import axios from "axios";
@ -73,6 +79,145 @@ export function TrackingDetails({
const { data: config } = useSWR<FrigateConfig>("config");
// Fetch recording segments for the event's time range to handle motion-only gaps
const eventStartRecord = useMemo(
() => (event.start_time ?? 0) + annotationOffset / 1000,
[event.start_time, annotationOffset],
);
const eventEndRecord = useMemo(
() => (event.end_time ?? Date.now() / 1000) + annotationOffset / 1000,
[event.end_time, annotationOffset],
);
const { data: recordings } = useSWR<Recording[]>(
event.camera
? [
`${event.camera}/recordings`,
{
after: eventStartRecord - REVIEW_PADDING,
before: eventEndRecord + REVIEW_PADDING,
},
]
: null,
);
// Convert a timeline timestamp to actual video player time, accounting for
// motion-only recording gaps. Uses the same algorithm as DynamicVideoController.
const timestampToVideoTime = useCallback(
(timestamp: number): number => {
if (!recordings || recordings.length === 0) {
// Fallback to simple calculation if no recordings data
return timestamp - (eventStartRecord - REVIEW_PADDING);
}
const videoStartTime = eventStartRecord - REVIEW_PADDING;
// If timestamp is before video start, return 0
if (timestamp < videoStartTime) return 0;
// Check if timestamp is before the first recording or after the last
if (
timestamp < recordings[0].start_time ||
timestamp > recordings[recordings.length - 1].end_time
) {
// No recording available at this timestamp
return 0;
}
// Calculate the inpoint offset - the HLS video may start partway through the first segment
let inpointOffset = 0;
if (
videoStartTime > recordings[0].start_time &&
videoStartTime < recordings[0].end_time
) {
inpointOffset = videoStartTime - recordings[0].start_time;
}
let seekSeconds = 0;
for (const segment of recordings) {
// Skip segments that end before our timestamp
if (segment.end_time <= timestamp) {
// Add this segment's duration, but subtract inpoint offset from first segment
if (segment === recordings[0]) {
seekSeconds += segment.duration - inpointOffset;
} else {
seekSeconds += segment.duration;
}
} else if (segment.start_time <= timestamp) {
// The timestamp is within this segment
if (segment === recordings[0]) {
// For the first segment, account for the inpoint offset
seekSeconds +=
timestamp - Math.max(segment.start_time, videoStartTime);
} else {
seekSeconds += timestamp - segment.start_time;
}
break;
}
}
return seekSeconds;
},
[recordings, eventStartRecord],
);
// Convert video player time back to timeline timestamp, accounting for
// motion-only recording gaps. Reverse of timestampToVideoTime.
const videoTimeToTimestamp = useCallback(
(playerTime: number): number => {
if (!recordings || recordings.length === 0) {
// Fallback to simple calculation if no recordings data
const videoStartTime = eventStartRecord - REVIEW_PADDING;
return playerTime + videoStartTime;
}
const videoStartTime = eventStartRecord - REVIEW_PADDING;
// Calculate the inpoint offset - the video may start partway through the first segment
let inpointOffset = 0;
if (
videoStartTime > recordings[0].start_time &&
videoStartTime < recordings[0].end_time
) {
inpointOffset = videoStartTime - recordings[0].start_time;
}
let timestamp = 0;
let totalTime = 0;
for (const segment of recordings) {
const segmentDuration =
segment === recordings[0]
? segment.duration - inpointOffset
: segment.duration;
if (totalTime + segmentDuration > playerTime) {
// The player time is within this segment
if (segment === recordings[0]) {
// For the first segment, add the inpoint offset
timestamp =
Math.max(segment.start_time, videoStartTime) +
(playerTime - totalTime);
} else {
timestamp = segment.start_time + (playerTime - totalTime);
}
break;
} else {
totalTime += segmentDuration;
}
}
return timestamp;
},
[recordings, eventStartRecord],
);
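The two converters above are pure functions over the recording segment list. A minimal standalone sketch of the forward mapping (the `Segment` shape mirrors the fields used from Frigate's `Recording` type; the sample values are invented):

```typescript
type Segment = {
  start_time: number; // unix seconds
  end_time: number;
  duration: number; // seconds of playable video in this segment
};

function timestampToPlayerTime(
  segments: Segment[],
  videoStartTime: number,
  timestamp: number,
): number {
  if (segments.length === 0) {
    // no segment data: fall back to a naive offset
    return Math.max(0, timestamp - videoStartTime);
  }
  const first = segments[0];
  const last = segments[segments.length - 1];
  if (
    timestamp < Math.max(videoStartTime, first.start_time) ||
    timestamp > last.end_time
  ) {
    return 0; // no recording exists at this timestamp
  }
  // the clip may begin partway through the first segment
  const inpointOffset =
    videoStartTime > first.start_time && videoStartTime < first.end_time
      ? videoStartTime - first.start_time
      : 0;
  let seek = 0;
  for (const seg of segments) {
    if (seg.end_time <= timestamp) {
      // whole segment plays before the target timestamp
      seek += seg === first ? seg.duration - inpointOffset : seg.duration;
    } else if (seg.start_time <= timestamp) {
      // target falls inside this segment
      seek +=
        seg === first
          ? timestamp - Math.max(seg.start_time, videoStartTime)
          : timestamp - seg.start_time;
      break;
    }
    // segments that start after `timestamp` sit past a motion-only gap and add nothing
  }
  return seek;
}

// two 10 s segments separated by a 30 s motion-only gap
const segs: Segment[] = [
  { start_time: 100, end_time: 110, duration: 10 },
  { start_time: 140, end_time: 150, duration: 10 },
];
// 5 s into the second segment is only 15 s of playable video
console.log(timestampToPlayerTime(segs, 100, 145)); // → 15
```

A timestamp that falls inside the gap maps to the end of the preceding segment, which matches how the HLS clip simply skips over motion-only stretches.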
eventSequence?.map((event) => {
event.data.zones_friendly_names = event.data?.zones?.map((zone) => {
return resolveZoneName(config, zone);
});
});
// Use manualOverride (set when seeking in image mode) if present so
// lifecycle rows and overlays follow image-mode seeks. Otherwise fall
// back to currentTime used for video mode.
@ -82,9 +227,16 @@ export function TrackingDetails({
}, [manualOverride, currentTime, annotationOffset]);
const containerRef = useRef<HTMLDivElement | null>(null);
const timelineContainerRef = useRef<HTMLDivElement | null>(null);
const rowRefs = useRef<(HTMLDivElement | null)[]>([]);
const [_selectedZone, setSelectedZone] = useState("");
const [_lifecycleZones, setLifecycleZones] = useState<string[]>([]);
const [seekToTimestamp, setSeekToTimestamp] = useState<number | null>(null);
const [lineBottomOffsetPx, setLineBottomOffsetPx] = useState<number>(32);
const [lineTopOffsetPx, setLineTopOffsetPx] = useState<number>(8);
const [blueLineHeightPx, setBlueLineHeightPx] = useState<number>(0);
const [timelineSize] = useResizeObserver(timelineContainerRef);
const aspectRatio = useMemo(() => {
if (!config) {
@ -133,17 +285,14 @@ export function TrackingDetails({
return;
}
// For video mode: convert to video-relative time and seek player
const eventStartRecord =
(event.start_time ?? 0) + annotationOffset / 1000;
const videoStartTime = eventStartRecord - REVIEW_PADDING;
const relativeTime = targetTimeRecord - videoStartTime;
// For video mode: convert to video-relative time (accounting for motion-only gaps)
const relativeTime = timestampToVideoTime(targetTimeRecord);
if (videoRef.current) {
videoRef.current.currentTime = relativeTime;
}
},
[event.start_time, annotationOffset, displaySource],
[annotationOffset, displaySource, timestampToVideoTime],
);
const formattedStart = config
@ -162,21 +311,22 @@ export function TrackingDetails({
})
: "";
const formattedEnd = config
? formatUnixTimestampToDateTime(event.end_time ?? 0, {
timezone: config.ui.timezone,
date_format:
config.ui.time_format == "24hour"
? t("time.formattedTimestamp.24hour", {
ns: "common",
})
: t("time.formattedTimestamp.12hour", {
ns: "common",
}),
time_style: "medium",
date_style: "medium",
})
: "";
const formattedEnd =
config && event.end_time != null
? formatUnixTimestampToDateTime(event.end_time, {
timezone: config.ui.timezone,
date_format:
config.ui.time_format == "24hour"
? t("time.formattedTimestamp.24hour", {
ns: "common",
})
: t("time.formattedTimestamp.12hour", {
ns: "common",
}),
time_style: "medium",
date_style: "medium",
})
: "";
useEffect(() => {
if (!eventSequence || eventSequence.length === 0) return;
@ -195,79 +345,83 @@ export function TrackingDetails({
}
// seekToTimestamp is a record stream timestamp
// event.start_time is detect stream time, convert to record
// The video clip starts at (eventStartRecord - REVIEW_PADDING)
// Convert to video position (accounting for motion-only recording gaps)
if (!videoRef.current) return;
const eventStartRecord = event.start_time + annotationOffset / 1000;
const videoStartTime = eventStartRecord - REVIEW_PADDING;
const relativeTime = seekToTimestamp - videoStartTime;
const relativeTime = timestampToVideoTime(seekToTimestamp);
if (relativeTime >= 0) {
videoRef.current.currentTime = relativeTime;
}
setSeekToTimestamp(null);
}, [
seekToTimestamp,
event.start_time,
annotationOffset,
apiHost,
event.camera,
displaySource,
]);
}, [seekToTimestamp, displaySource, timestampToVideoTime]);
const isWithinEventRange =
effectiveTime !== undefined &&
event.start_time !== undefined &&
event.end_time !== undefined &&
effectiveTime >= event.start_time &&
effectiveTime <= event.end_time;
// Calculate how far down the blue line should extend based on effectiveTime
const calculateLineHeight = useCallback(() => {
if (!eventSequence || eventSequence.length === 0 || !isWithinEventRange) {
return 0;
const isWithinEventRange = useMemo(() => {
if (effectiveTime === undefined || event.start_time === undefined) {
return false;
}
const currentTime = effectiveTime ?? 0;
// Find which events have been passed
let lastPassedIndex = -1;
for (let i = 0; i < eventSequence.length; i++) {
if (currentTime >= (eventSequence[i].timestamp ?? 0)) {
lastPassedIndex = i;
} else {
break;
// If an event has not ended yet, fall back to last timestamp in eventSequence
let eventEnd = event.end_time;
if (eventEnd == null && eventSequence && eventSequence.length > 0) {
const last = eventSequence[eventSequence.length - 1];
if (last && last.timestamp !== undefined) {
eventEnd = last.timestamp;
}
}
// No events passed yet
if (lastPassedIndex < 0) return 0;
if (eventEnd == null) {
return false;
}
return effectiveTime >= event.start_time && effectiveTime <= eventEnd;
}, [effectiveTime, event.start_time, event.end_time, eventSequence]);
// All events passed
if (lastPassedIndex >= eventSequence.length - 1) return 100;
// Dynamically compute pixel offsets so the timeline line starts at the
// first row midpoint and ends at the last row midpoint. For accuracy,
// measure the center Y of each lifecycle row and interpolate the current
// effective time into a pixel position; then set the blue line height
// so it reaches the center dot at the same time the dot becomes active.
useEffect(() => {
if (!timelineContainerRef.current || !eventSequence) return;
// Calculate percentage based on item position, not time
// Each item occupies an equal visual space regardless of time gaps
const itemPercentage = 100 / (eventSequence.length - 1);
const containerRect = timelineContainerRef.current.getBoundingClientRect();
const validRefs = rowRefs.current.filter((r) => r !== null);
if (validRefs.length === 0) return;
// Find progress between current and next event for smooth transition
const currentEvent = eventSequence[lastPassedIndex];
const nextEvent = eventSequence[lastPassedIndex + 1];
const currentTimestamp = currentEvent.timestamp ?? 0;
const nextTimestamp = nextEvent.timestamp ?? 0;
const centers = validRefs.map((n) => {
const r = n.getBoundingClientRect();
return r.top + r.height / 2 - containerRect.top;
});
// Calculate interpolation between the two events
const timeBetween = nextTimestamp - currentTimestamp;
const timeElapsed = currentTime - currentTimestamp;
const interpolation = timeBetween > 0 ? timeElapsed / timeBetween : 0;
// Base position plus interpolated progress to next item
return Math.min(
100,
lastPassedIndex * itemPercentage + interpolation * itemPercentage,
const topOffset = Math.max(0, centers[0]);
const bottomOffset = Math.max(
0,
containerRect.height - centers[centers.length - 1],
);
}, [eventSequence, effectiveTime, isWithinEventRange]);
const blueLineHeight = calculateLineHeight();
setLineTopOffsetPx(Math.round(topOffset));
setLineBottomOffsetPx(Math.round(bottomOffset));
const eff = effectiveTime ?? 0;
const timestamps = eventSequence.map((s) => s.timestamp ?? 0);
let pixelPos = centers[0];
if (eff <= timestamps[0]) {
pixelPos = centers[0];
} else if (eff >= timestamps[timestamps.length - 1]) {
pixelPos = centers[centers.length - 1];
} else {
for (let i = 0; i < timestamps.length - 1; i++) {
const t1 = timestamps[i];
const t2 = timestamps[i + 1];
if (eff >= t1 && eff <= t2) {
const ratio = t2 > t1 ? (eff - t1) / (t2 - t1) : 0;
pixelPos = centers[i] + ratio * (centers[i + 1] - centers[i]);
break;
}
}
}
const bluePx = Math.round(Math.max(0, pixelPos - topOffset));
setBlueLineHeightPx(bluePx);
}, [eventSequence, timelineSize.width, timelineSize.height, effectiveTime]);
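The pixel math in this effect reduces to a piecewise-linear interpolation between measured row centers, clamped at the first and last rows. A hedged sketch with invented row centers and timestamps:

```typescript
function timeToPixel(
  centers: number[], // measured center Y of each lifecycle row, in px
  timestamps: number[], // timestamp of each lifecycle row, sorted ascending
  time: number,
): number {
  if (time <= timestamps[0]) return centers[0];
  const last = timestamps.length - 1;
  if (time >= timestamps[last]) return centers[last];
  for (let i = 0; i < last; i++) {
    const t1 = timestamps[i];
    const t2 = timestamps[i + 1];
    if (time >= t1 && time <= t2) {
      // linear interpolation between the two surrounding row centers
      const ratio = t2 > t1 ? (time - t1) / (t2 - t1) : 0;
      return centers[i] + ratio * (centers[i + 1] - centers[i]);
    }
  }
  return centers[0]; // unreachable when timestamps are sorted
}

// rows centered at 20 px and 60 px, events at t=0 and t=10
console.log(timeToPixel([20, 60], [0, 10], 5)); // → 40
```

Because positions are derived from measured rect centers rather than fixed percentages, the blue line reaches each dot exactly when that row becomes active, regardless of row heights.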
const videoSource = useMemo(() => {
// event.start_time and event.end_time are in DETECT stream time
@ -305,14 +459,13 @@ export function TrackingDetails({
const handleTimeUpdate = useCallback(
(time: number) => {
// event.start_time is detect stream time, convert to record
const eventStartRecord = event.start_time + annotationOffset / 1000;
const videoStartTime = eventStartRecord - REVIEW_PADDING;
const absoluteTime = time + videoStartTime;
// Convert video player time back to timeline timestamp
// accounting for motion-only recording gaps
const absoluteTime = videoTimeToTimestamp(time);
setCurrentTime(absoluteTime);
},
[event.start_time, annotationOffset],
[videoTimeToTimestamp],
);
const [src, setSrc] = useState(
@ -336,6 +489,10 @@ export function TrackingDetails({
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [displayedRecordTime]);
const onUploadFrameToPlus = useCallback(() => {
return axios.post(`/${event.camera}/plus/${currentTime}`);
}, [event.camera, currentTime]);
if (!config) {
return <ActivityIndicator />;
}
@ -345,7 +502,8 @@ export function TrackingDetails({
className={cn(
isDesktop
? "flex size-full justify-evenly gap-4 overflow-hidden"
: "flex size-full flex-col gap-2",
: "flex flex-col gap-2",
!isDesktop && cameraAspect === "tall" && "size-full",
className,
)}
>
@ -380,6 +538,7 @@ export function TrackingDetails({
frigateControls={true}
onTimeUpdate={handleTimeUpdate}
onSeekToTime={handleSeekToTime}
onUploadFrame={onUploadFrameToPlus}
isDetailMode={true}
camera={event.camera}
currentTimeOverride={currentTime}
@ -446,7 +605,7 @@ export function TrackingDetails({
)}
>
{isDesktop && tabs && (
<div className="mb-4 flex items-center justify-between">
<div className="mb-2 flex items-center justify-between">
<div className="flex-1">{tabs}</div>
</div>
)}
@ -457,7 +616,7 @@ export function TrackingDetails({
>
{config?.cameras[event.camera]?.onvif.autotracking
.enabled_in_config && (
<div className="mb-2 text-sm text-danger">
<div className="mb-2 ml-3 text-sm text-danger">
{t("trackingDetails.autoTrackingTips")}
</div>
)}
@ -490,9 +649,16 @@ export function TrackingDetails({
</div>
<div className="flex items-center gap-2">
<span className="capitalize">{label}</span>
<span className="md:text-md text-xs text-secondary-foreground">
{formattedStart ?? ""} - {formattedEnd ?? ""}
</span>
<div className="md:text-md flex items-center text-xs text-secondary-foreground">
{formattedStart ?? ""}
{event.end_time != null ? (
<> - {formattedEnd}</>
) : (
<div className="inline-block">
<ActivityIndicator className="ml-3 size-4" />
</div>
)}
</div>
{event.data?.recognized_license_plate && (
<>
<span className="text-secondary-foreground">·</span>
@ -518,12 +684,21 @@ export function TrackingDetails({
{t("detail.noObjectDetailData", { ns: "views/events" })}
</div>
) : (
<div className="-pb-2 relative mx-0">
<div className="absolute -top-2 bottom-8 left-6 z-0 w-0.5 -translate-x-1/2 bg-secondary-foreground" />
<div
className="-pb-2 relative mx-0"
ref={timelineContainerRef}
>
<div
className="absolute -top-2 left-6 z-0 w-0.5 -translate-x-1/2 bg-secondary-foreground"
style={{ bottom: lineBottomOffsetPx }}
/>
{isWithinEventRange && (
<div
className="absolute left-6 top-2 z-[5] max-h-[calc(100%-3rem)] w-0.5 -translate-x-1/2 bg-selected transition-all duration-300"
style={{ height: `${blueLineHeight}%` }}
className="absolute left-6 z-[5] w-0.5 -translate-x-1/2 bg-selected transition-all duration-300"
style={{
top: `${lineTopOffsetPx}px`,
height: `${blueLineHeightPx}px`,
}}
/>
)}
<div className="space-y-2">
@ -576,20 +751,26 @@ export function TrackingDetails({
: undefined;
return (
<LifecycleIconRow
<div
key={`${item.timestamp}-${item.source_id ?? ""}-${idx}`}
item={item}
isActive={isActive}
formattedEventTimestamp={formattedEventTimestamp}
ratio={ratio}
areaPx={areaPx}
areaPct={areaPct}
onClick={() => handleLifecycleClick(item)}
setSelectedZone={setSelectedZone}
getZoneColor={getZoneColor}
effectiveTime={effectiveTime}
isTimelineActive={isWithinEventRange}
/>
ref={(el) => {
rowRefs.current[idx] = el;
}}
>
<LifecycleIconRow
item={item}
isActive={isActive}
formattedEventTimestamp={formattedEventTimestamp}
ratio={ratio}
areaPx={areaPx}
areaPct={areaPct}
onClick={() => handleLifecycleClick(item)}
setSelectedZone={setSelectedZone}
getZoneColor={getZoneColor}
effectiveTime={effectiveTime}
isTimelineActive={isWithinEventRange}
/>
</div>
);
})}
</div>
@ -712,8 +893,13 @@ function LifecycleIconRow({
backgroundColor: `rgb(${color})`,
}}
/>
<span className="smart-capitalize">
{zone.replaceAll("_", " ")}
<span
className={cn(
item.data?.zones_friendly_names?.[zidx] === zone &&
"smart-capitalize",
)}
>
{item.data?.zones_friendly_names?.[zidx]}
</span>
</Badge>
);

View File

@ -20,7 +20,9 @@ import {
SheetTitle,
SheetTrigger,
} from "@/components/ui/sheet";
import { cn } from "@/lib/utils";
import { isMobile } from "react-device-detect";
import { useRef } from "react";
type PlatformAwareDialogProps = {
trigger: JSX.Element;
@ -79,6 +81,8 @@ export function PlatformAwareSheet({
open,
onOpenChange,
}: PlatformAwareSheetProps) {
const scrollerRef = useRef<HTMLDivElement>(null);
if (isMobile) {
return (
<MobilePage open={open} onOpenChange={onOpenChange}>
@ -86,14 +90,22 @@ export function PlatformAwareSheet({
{trigger}
</MobilePageTrigger>
<MobilePagePortal>
<MobilePageContent className="h-full overflow-hidden">
<MobilePageContent
className="flex h-full flex-col"
scrollerRef={scrollerRef}
>
<MobilePageHeader
className="mx-2"
onClose={() => onOpenChange(false)}
>
<MobilePageTitle>{title}</MobilePageTitle>
</MobilePageHeader>
<div className={contentClassName}>{content}</div>
<div
ref={scrollerRef}
className={cn("flex-1 overflow-y-auto", contentClassName)}
>
{content}
</div>
</MobilePageContent>
</MobilePagePortal>
</MobilePage>

View File

@ -98,7 +98,11 @@ export default function RestartDialog({
open={restartingSheetOpen}
onOpenChange={() => setRestartingSheetOpen(false)}
>
<SheetContent side="top" onInteractOutside={(e) => e.preventDefault()}>
<SheetContent
side="top"
onInteractOutside={(e) => e.preventDefault()}
className="[&>button:first-of-type]:hidden"
>
<div className="flex flex-col items-center">
<ActivityIndicator />
<SheetHeader className="mt-5 text-center">

View File

@ -230,6 +230,7 @@ export default function SearchFilterDialog({
<PlatformAwareSheet
trigger={trigger}
title={t("more")}
titleClassName="mb-5 -mt-3"
content={content}
contentClassName={cn(
"w-auto lg:min-w-[275px] scrollbar-container h-full overflow-auto px-4",
@ -429,7 +430,8 @@ export function ZoneFilterContent({
{allZones.map((item) => (
<FilterSwitch
key={item}
label={item.replaceAll("_", " ")}
label={item}
type={"zone"}
isChecked={zones?.includes(item) ?? false}
onCheckedChange={(isChecked) => {
if (isChecked) {

View File

@ -77,7 +77,10 @@ export default function BirdseyeLivePlayer({
)}
onClick={onClick}
>
<ImageShadowOverlay />
<ImageShadowOverlay
upperClassName="md:rounded-2xl"
lowerClassName="md:rounded-2xl"
/>
<div className="size-full" ref={playerRef}>
{player}
</div>

View File

@ -318,6 +318,7 @@ export default function HlsVideoPlayer({
{isDetailMode &&
camera &&
currentTime &&
loadedMetadata &&
videoDimensions.width > 0 &&
videoDimensions.height > 0 && (
<div className="absolute z-50 size-full">

View File

@ -331,7 +331,10 @@ export default function LivePlayer({
>
{cameraEnabled &&
((showStillWithoutActivity && !liveReady) || liveReady) && (
<ImageShadowOverlay />
<ImageShadowOverlay
upperClassName="md:rounded-2xl"
lowerClassName="md:rounded-2xl"
/>
)}
{player}
{cameraEnabled &&

View File

@ -1,4 +1,5 @@
import { baseUrl } from "@/api/baseUrl";
import { usePersistence } from "@/hooks/use-persistence";
import {
LivePlayerError,
PlayerStatsType,
@ -71,6 +72,8 @@ function MSEPlayer({
const [errorCount, setErrorCount] = useState<number>(0);
const totalBytesLoaded = useRef(0);
const [fallbackTimeout] = usePersistence<number>("liveFallbackTimeout", 3);
const videoRef = useRef<HTMLVideoElement>(null);
const wsRef = useRef<WebSocket | null>(null);
const reconnectTIDRef = useRef<number | null>(null);
@ -475,7 +478,10 @@ function MSEPlayer({
setBufferTimeout(undefined);
}
const timeoutDuration = bufferTime == 0 ? 5000 : 3000;
const timeoutDuration =
bufferTime == 0
? (fallbackTimeout ?? 3) * 2 * 1000
: (fallbackTimeout ?? 3) * 1000;
setBufferTimeout(
setTimeout(() => {
if (
@ -500,6 +506,7 @@ function MSEPlayer({
onError,
onPlaying,
playbackEnabled,
fallbackTimeout,
]);
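The new timeout rule above replaces the hard-coded 5000/3000 ms values with multiples of the persisted `liveFallbackTimeout` setting; a small sketch of just that arithmetic (function name invented for illustration):

```typescript
// with nothing buffered yet (bufferTime == 0) wait twice the user-configured
// fallback timeout (default 3 s) before giving up; otherwise one multiple
function bufferTimeoutMs(bufferTime: number, fallbackTimeout?: number): number {
  const base = (fallbackTimeout ?? 3) * 1000;
  return bufferTime === 0 ? base * 2 : base;
}

console.log(bufferTimeoutMs(0)); // → 6000
console.log(bufferTimeoutMs(1.5, 5)); // → 5000
```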
useEffect(() => {

View File

@ -18,7 +18,7 @@ import { z } from "zod";
import axios from "axios";
import { toast, Toaster } from "sonner";
import { useTranslation } from "react-i18next";
import { useState, useMemo } from "react";
import { useState, useMemo, useEffect } from "react";
import { LuTrash2, LuPlus } from "react-icons/lu";
import ActivityIndicator from "@/components/indicators/activity-indicator";
import { FrigateConfig } from "@/types/frigateConfig";
@ -42,7 +42,15 @@ export default function CameraEditForm({
onCancel,
}: CameraEditFormProps) {
const { t } = useTranslation(["views/settings"]);
const { data: config } = useSWR<FrigateConfig>("config");
const { data: config, mutate: mutateConfig } =
useSWR<FrigateConfig>("config");
const { data: rawPaths, mutate: mutateRawPaths } = useSWR<{
cameras: Record<
string,
{ ffmpeg: { inputs: { path: string; roles: string[] }[] } }
>;
go2rtc: { streams: Record<string, string | string[]> };
}>(cameraName ? "config/raw_paths" : null);
const [isLoading, setIsLoading] = useState(false);
const formSchema = useMemo(
@ -145,14 +153,23 @@ export default function CameraEditForm({
if (cameraName && config?.cameras[cameraName]) {
const camera = config.cameras[cameraName];
defaultValues.enabled = camera.enabled ?? true;
defaultValues.ffmpeg.inputs = camera.ffmpeg?.inputs?.length
? camera.ffmpeg.inputs.map((input) => ({
// Use raw paths from the admin endpoint if available, otherwise fall back to masked paths
const rawCameraData = rawPaths?.cameras?.[cameraName];
defaultValues.ffmpeg.inputs = rawCameraData?.ffmpeg?.inputs?.length
? rawCameraData.ffmpeg.inputs.map((input) => ({
path: input.path,
roles: input.roles as Role[],
}))
: defaultValues.ffmpeg.inputs;
: camera.ffmpeg?.inputs?.length
? camera.ffmpeg.inputs.map((input) => ({
path: input.path,
roles: input.roles as Role[],
}))
: defaultValues.ffmpeg.inputs;
const go2rtcStreams = config.go2rtc?.streams || {};
const go2rtcStreams =
rawPaths?.go2rtc?.streams || config.go2rtc?.streams || {};
const cameraStreams: Record<string, string[]> = {};
// get candidate stream names for this camera. could be the camera's own name,
@ -196,6 +213,60 @@ export default function CameraEditForm({
mode: "onChange",
});
// Update form values when rawPaths loads
useEffect(() => {
if (
cameraName &&
config?.cameras[cameraName] &&
rawPaths?.cameras?.[cameraName]
) {
const camera = config.cameras[cameraName];
const rawCameraData = rawPaths.cameras[cameraName];
// Update ffmpeg inputs with raw paths
if (rawCameraData.ffmpeg?.inputs?.length) {
form.setValue(
"ffmpeg.inputs",
rawCameraData.ffmpeg.inputs.map((input) => ({
path: input.path,
roles: input.roles as Role[],
})),
);
}
// Update go2rtc streams with raw URLs
if (rawPaths.go2rtc?.streams) {
const validNames = new Set<string>();
validNames.add(cameraName);
camera.ffmpeg?.inputs?.forEach((input) => {
const restreamMatch = input.path.match(
/^rtsp:\/\/127\.0\.0\.1:8554\/([^?#/]+)(?:[?#].*)?$/,
);
if (restreamMatch) {
validNames.add(restreamMatch[1]);
}
});
const liveStreams = camera?.live?.streams;
if (liveStreams) {
Object.keys(liveStreams).forEach((key) => validNames.add(key));
}
const cameraStreams: Record<string, string[]> = {};
Object.entries(rawPaths.go2rtc.streams).forEach(([name, urls]) => {
if (validNames.has(name)) {
cameraStreams[name] = Array.isArray(urls) ? urls : [urls];
}
});
if (Object.keys(cameraStreams).length > 0) {
form.setValue("go2rtcStreams", cameraStreams);
}
}
}
}, [cameraName, config, rawPaths, form]);
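The effect above treats any ffmpeg input pointing at the internal go2rtc restream (`rtsp://127.0.0.1:8554/<name>`) as evidence that `<name>` is a stream belonging to this camera. A standalone sketch using the same regex, with an invented helper name and sample paths:

```typescript
// regex copied from the effect above: internal restream URLs only,
// capturing the stream name before any query/fragment
const RESTREAM_RE = /^rtsp:\/\/127\.0\.0\.1:8554\/([^?#/]+)(?:[?#].*)?$/;

function restreamName(path: string): string | null {
  const m = path.match(RESTREAM_RE);
  return m ? m[1] : null;
}

console.log(restreamName("rtsp://127.0.0.1:8554/front_door?video=copy")); // → "front_door"
console.log(restreamName("rtsp://192.168.1.10:554/stream1")); // → null
```

Direct camera URLs (any other host) deliberately fail the match, so only names actually restreamed through go2rtc are added to the editable set.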
const { fields, append, remove } = useFieldArray({
control: form.control,
name: "ffmpeg.inputs",
@ -268,6 +339,8 @@ export default function CameraEditForm({
}),
{ position: "top-center" },
);
mutateConfig();
mutateRawPaths();
if (onSave) onSave();
});
} else {
@ -277,6 +350,8 @@ export default function CameraEditForm({
}),
{ position: "top-center" },
);
mutateConfig();
mutateRawPaths();
if (onSave) onSave();
}
} else {

View File

@ -12,15 +12,15 @@ import { toast } from "sonner";
import useSWR from "swr";
import axios from "axios";
import Step1NameCamera from "@/components/settings/wizard/Step1NameCamera";
import Step2StreamConfig from "@/components/settings/wizard/Step2StreamConfig";
import Step3Validation from "@/components/settings/wizard/Step3Validation";
import Step2ProbeOrSnapshot from "@/components/settings/wizard/Step2ProbeOrSnapshot";
import Step3StreamConfig from "@/components/settings/wizard/Step3StreamConfig";
import Step4Validation from "@/components/settings/wizard/Step4Validation";
import type {
WizardFormData,
CameraConfigData,
ConfigSetBody,
} from "@/types/cameraWizard";
import { processCameraName } from "@/utils/cameraUtil";
import { isDesktop } from "react-device-detect";
import { cn } from "@/lib/utils";
type WizardState = {
@ -57,6 +57,7 @@ const wizardReducer = (
const STEPS = [
"cameraWizard.steps.nameAndConnection",
"cameraWizard.steps.probeOrSnapshot",
"cameraWizard.steps.streamConfiguration",
"cameraWizard.steps.validationAndTesting",
];
@ -100,20 +101,20 @@ export default function CameraWizardDialog({
const canProceedToNext = useCallback((): boolean => {
switch (currentStep) {
case 0:
// Can proceed if camera name is set and at least one stream exists
return !!(
state.wizardData.cameraName &&
(state.wizardData.streams?.length ?? 0) > 0
);
// Step 1: Can proceed if camera name is set
return !!state.wizardData.cameraName;
case 1:
// Can proceed if at least one stream has 'detect' role
// Step 2: Can proceed if at least one stream exists (from probe or manual test)
return (state.wizardData.streams?.length ?? 0) > 0;
case 2:
// Step 3: Can proceed if at least one stream has 'detect' role
return !!(
state.wizardData.streams?.some((stream) =>
stream.roles.includes("detect"),
) ?? false
);
case 2:
// Always can proceed from final step (save will be handled there)
case 3:
// Step 4: Always can proceed from final step (save will be handled there)
return true;
default:
return false;
@ -340,13 +341,7 @@ export default function CameraWizardDialog({
<Dialog open={open} onOpenChange={handleClose}>
<DialogContent
className={cn(
"max-h-[90dvh] max-w-xl overflow-y-auto",
isDesktop &&
currentStep == 0 &&
state.wizardData?.streams?.[0]?.testResult?.snapshot &&
"max-w-4xl",
isDesktop && currentStep == 1 && "max-w-2xl",
isDesktop && currentStep > 1 && "max-w-4xl",
"scrollbar-container max-h-[90dvh] max-w-3xl overflow-y-auto",
)}
onInteractOutside={(e) => {
e.preventDefault();
@ -385,7 +380,16 @@ export default function CameraWizardDialog({
/>
)}
{currentStep === 1 && (
<Step2StreamConfig
<Step2ProbeOrSnapshot
wizardData={state.wizardData}
onUpdate={onUpdate}
onNext={handleNext}
onBack={handleBack}
probeMode={state.wizardData.probeMode ?? true}
/>
)}
{currentStep === 2 && (
<Step3StreamConfig
wizardData={state.wizardData}
onUpdate={onUpdate}
onBack={handleBack}
@ -393,8 +397,8 @@ export default function CameraWizardDialog({
canProceed={canProceedToNext()}
/>
)}
{currentStep === 2 && (
<Step3Validation
{currentStep === 3 && (
<Step4Validation
wizardData={state.wizardData}
onUpdate={onUpdate}
onSave={handleSave}

View File

@ -262,13 +262,17 @@ export function PolygonCanvas({
};
useEffect(() => {
if (activePolygonIndex === undefined || !polygons) {
if (activePolygonIndex === undefined || !polygons?.length) {
return;
}
const updatedPolygons = [...polygons];
const activePolygon = updatedPolygons[activePolygonIndex];
if (!activePolygon) {
return;
}
// add default points order for already completed polygons
if (!activePolygon.pointsOrder && activePolygon.isFinished) {
updatedPolygons[activePolygonIndex] = {

View File

@ -179,7 +179,7 @@ export default function PolygonItem({
if (res.status === 200) {
toast.success(
t("masksAndZones.form.polygonDrawing.delete.success", {
name: polygon?.name,
name: polygon?.friendly_name ?? polygon?.name,
}),
{
position: "top-center",
@ -261,7 +261,9 @@ export default function PolygonItem({
}}
/>
)}
<p className="cursor-default">{polygon.name}</p>
<p className="cursor-default">
{polygon.friendly_name ?? polygon.name}
</p>
</div>
<AlertDialog
open={deleteDialogOpen}
@ -278,7 +280,7 @@ export default function PolygonItem({
ns="views/settings"
values={{
type: polygon.type.replace("_", " "),
name: polygon.name,
name: polygon.friendly_name ?? polygon.name,
}}
>
masksAndZones.form.polygonDrawing.delete.desc

View File

@ -34,6 +34,7 @@ import { Link } from "react-router-dom";
import { LuExternalLink } from "react-icons/lu";
import { useDocDomain } from "@/hooks/use-doc-domain";
import { getTranslatedLabel } from "@/utils/i18n";
import NameAndIdFields from "../input/NameAndIdFields";
type ZoneEditPaneProps = {
polygons?: Polygon[];
@ -146,15 +147,37 @@ export default function ZoneEditPane({
"masksAndZones.form.zoneName.error.mustNotContainPeriod",
),
},
)
.refine((value: string) => /^[a-zA-Z0-9_-]+$/.test(value), {
message: t("masksAndZones.form.zoneName.error.hasIllegalCharacter"),
})
.refine((value: string) => /[a-zA-Z]/.test(value), {
),
friendly_name: z
.string()
.min(2, {
message: t(
"masksAndZones.form.zoneName.error.mustHaveAtLeastOneLetter",
"masksAndZones.form.zoneName.error.mustBeAtLeastTwoCharacters",
),
}),
})
.refine(
(value: string) => {
return !cameras.map((cam) => cam.name).includes(value);
},
{
message: t(
"masksAndZones.form.zoneName.error.mustNotBeSameWithCamera",
),
},
)
.refine(
(value: string) => {
const otherPolygonNames =
polygons
?.filter((_, index) => index !== activePolygonIndex)
.map((polygon) => polygon.name) || [];
return !otherPolygonNames.includes(value);
},
{
message: t("masksAndZones.form.zoneName.error.alreadyExists"),
},
),
inertia: z.coerce
.number()
.min(1, {
@ -247,6 +270,7 @@ export default function ZoneEditPane({
mode: "onBlur",
defaultValues: {
name: polygon?.name ?? "",
friendly_name: polygon?.friendly_name ?? polygon?.name ?? "",
inertia:
polygon?.camera &&
polygon?.name &&
@ -286,6 +310,7 @@ export default function ZoneEditPane({
async (
{
name: zoneName,
friendly_name,
inertia,
loitering_time,
objects: form_objects,
@ -415,9 +440,14 @@ export default function ZoneEditPane({
}
}
let friendlyNameQuery = "";
if (friendly_name) {
friendlyNameQuery = `&cameras.${polygon?.camera}.zones.${zoneName}.friendly_name=${encodeURIComponent(friendly_name)}`;
}
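The fragment built above follows the same dotted-path convention as the other `config/set` query parameters, with the value URL-encoded since friendly names may contain spaces. A sketch in isolation (function, camera, and zone names invented for illustration):

```typescript
function friendlyNameQuery(
  camera: string,
  zoneName: string,
  friendlyName?: string,
): string {
  if (!friendlyName) return "";
  // dotted config path addressing the zone, value percent-encoded
  return `&cameras.${camera}.zones.${zoneName}.friendly_name=${encodeURIComponent(friendlyName)}`;
}

console.log(friendlyNameQuery("back_yard", "porch", "Front Porch"));
// → "&cameras.back_yard.zones.porch.friendly_name=Front%20Porch"
```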
axios
.put(
`config/set?cameras.${polygon?.camera}.zones.${zoneName}.coordinates=${coordinates}${inertiaQuery}${loiteringTimeQuery}${speedThresholdQuery}${distancesQuery}${objectQueries}${alertQueries}${detectionQueries}`,
`config/set?cameras.${polygon?.camera}.zones.${zoneName}.coordinates=${coordinates}${inertiaQuery}${loiteringTimeQuery}${speedThresholdQuery}${distancesQuery}${objectQueries}${friendlyNameQuery}${alertQueries}${detectionQueries}`,
{
requires_restart: 0,
update_topic: `config/cameras/${polygon.camera}/zones`,
@@ -427,7 +457,7 @@ export default function ZoneEditPane({
if (res.status === 200) {
toast.success(
t("masksAndZones.zones.toast.success", {
zoneName,
zoneName: friendly_name || zoneName,
}),
{
position: "top-center",
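The optional `friendly_name` handling in the hunk above appends a query parameter only when a value is present, URL-encoding it before it joins the existing `config/set` query string. A minimal sketch of that assembly, where `buildZoneQuery` is a hypothetical helper (not part of Frigate's codebase) shown only to isolate the pattern:

```typescript
type ZoneUpdate = {
  camera: string;
  zone: string;
  coordinates: string;
  friendlyName?: string;
};

function buildZoneQuery({
  camera,
  zone,
  coordinates,
  friendlyName,
}: ZoneUpdate): string {
  const base = `config/set?cameras.${camera}.zones.${zone}.coordinates=${coordinates}`;
  // Optional keys are appended only when set, mirroring friendlyNameQuery above;
  // encodeURIComponent keeps spaces and symbols from breaking the query string.
  const friendly = friendlyName
    ? `&cameras.${camera}.zones.${zone}.friendly_name=${encodeURIComponent(friendlyName)}`
    : "";
  return `${base}${friendly}`;
}
```

Omitting `friendly_name` leaves the query unchanged, so existing zones without a friendly name are unaffected.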
@@ -541,26 +571,17 @@ export default function ZoneEditPane({
<Form {...form}>
<form onSubmit={form.handleSubmit(onSubmit)} className="mt-2 space-y-6">
<FormField
<NameAndIdFields
type="zone"
control={form.control}
name="name"
render={({ field }) => (
<FormItem>
<FormLabel>{t("masksAndZones.zones.name.title")}</FormLabel>
<FormControl>
<Input
className="text-md w-full border border-input bg-background p-2 hover:bg-accent hover:text-accent-foreground dark:[color-scheme:dark]"
placeholder={t("masksAndZones.zones.name.inputPlaceHolder")}
{...field}
/>
</FormControl>
<FormDescription>
{t("masksAndZones.zones.name.tips")}
</FormDescription>
<FormMessage />
</FormItem>
)}
nameField="friendly_name"
idField="name"
idVisible={(polygon && polygon.name.length > 0) ?? false}
nameLabel={t("masksAndZones.zones.name.title")}
nameDescription={t("masksAndZones.zones.name.tips")}
placeholderName={t("masksAndZones.zones.name.inputPlaceHolder")}
/>
<Separator className="my-2 flex bg-secondary" />
<FormField
control={form.control}

View File

@@ -0,0 +1,363 @@
import { useTranslation } from "react-i18next";
import { Card, CardContent } from "@/components/ui/card";
import { Button } from "@/components/ui/button";
import ActivityIndicator from "@/components/indicators/activity-indicator";
import { FaCopy, FaCheck } from "react-icons/fa";
import { LuX } from "react-icons/lu";
import { CiCircleAlert } from "react-icons/ci";
import { Alert, AlertDescription, AlertTitle } from "@/components/ui/alert";
import { useState } from "react";
import { toast } from "sonner";
import type {
OnvifProbeResponse,
OnvifRtspCandidate,
TestResult,
CandidateTestMap,
} from "@/types/cameraWizard";
import { FaCircleCheck } from "react-icons/fa6";
import { cn } from "@/lib/utils";
import { maskUri } from "@/utils/cameraUtil";
type OnvifProbeResultsProps = {
isLoading: boolean;
isError: boolean;
error?: string;
probeResult?: OnvifProbeResponse;
onRetry: () => void;
selectedUris?: string[];
testCandidate?: (uri: string) => void;
candidateTests?: CandidateTestMap;
testingCandidates?: Record<string, boolean>;
};
export default function OnvifProbeResults({
isLoading,
isError,
error,
probeResult,
onRetry,
selectedUris,
testCandidate,
candidateTests,
testingCandidates,
}: OnvifProbeResultsProps) {
const { t } = useTranslation(["views/settings"]);
const [copiedUri, setCopiedUri] = useState<string | null>(null);
const handleCopyUri = (uri: string) => {
navigator.clipboard.writeText(uri);
setCopiedUri(uri);
toast.success(t("cameraWizard.step2.uriCopied"));
setTimeout(() => setCopiedUri(null), 2000);
};
if (isLoading) {
return (
<div className="flex flex-col items-center justify-center gap-4 py-8">
<ActivityIndicator className="size-6" />
<p className="text-sm text-muted-foreground">
{t("cameraWizard.step2.probingDevice")}
</p>
</div>
);
}
if (isError) {
return (
<div className="space-y-4">
<Alert variant="destructive">
<CiCircleAlert className="size-5" />
<AlertTitle>{t("cameraWizard.step2.probeError")}</AlertTitle>
{error && <AlertDescription>{error}</AlertDescription>}
</Alert>
<Button onClick={onRetry} variant="outline" className="w-full">
{t("button.retry", { ns: "common" })}
</Button>
</div>
);
}
if (!probeResult?.success) {
return (
<div className="space-y-4">
<Alert variant="destructive">
<CiCircleAlert className="size-5" />
<AlertTitle>{t("cameraWizard.step2.probeNoSuccess")}</AlertTitle>
{probeResult?.message && (
<AlertDescription>{probeResult.message}</AlertDescription>
)}
</Alert>
<Button onClick={onRetry} variant="outline" className="w-full">
{t("button.retry", { ns: "common" })}
</Button>
</div>
);
}
const rtspCandidates = (probeResult.rtsp_candidates || []).filter(
(c) => c.source === "GetStreamUri",
);
if (probeResult?.success && rtspCandidates.length === 0) {
return (
<div className="space-y-4">
<Alert variant="destructive">
<CiCircleAlert className="size-5" />
<AlertTitle>{t("cameraWizard.step2.noRtspCandidates")}</AlertTitle>
</Alert>
</div>
);
}
return (
<>
<div className="space-y-2">
{probeResult?.success && (
<div className="mb-3 flex flex-row items-center gap-2 text-sm text-success">
<FaCircleCheck className="size-4" />
<span>{t("cameraWizard.step2.probeSuccessful")}</span>
</div>
)}
<div className="text-sm">{t("cameraWizard.step2.deviceInfo")}</div>
<Card>
<CardContent className="space-y-2 p-4 text-sm">
{probeResult.manufacturer && (
<div>
<span className="text-muted-foreground">
{t("cameraWizard.step2.manufacturer")}:
</span>{" "}
<span className="text-primary-variant">
{probeResult.manufacturer}
</span>
</div>
)}
{probeResult.model && (
<div>
<span className="text-muted-foreground">
{t("cameraWizard.step2.model")}:
</span>{" "}
<span className="text-primary-variant">
{probeResult.model}
</span>
</div>
)}
{probeResult.firmware_version && (
<div>
<span className="text-muted-foreground">
{t("cameraWizard.step2.firmware")}:
</span>{" "}
<span className="text-primary-variant">
{probeResult.firmware_version}
</span>
</div>
)}
{probeResult.profiles_count !== undefined && (
<div>
<span className="text-muted-foreground">
{t("cameraWizard.step2.profiles")}:
</span>{" "}
<span className="text-primary-variant">
{probeResult.profiles_count}
</span>
</div>
)}
{probeResult.ptz_supported !== undefined && (
<div>
<span className="text-muted-foreground">
{t("cameraWizard.step2.ptzSupport")}:
</span>{" "}
<span className="text-primary-variant">
{probeResult.ptz_supported
? t("yes", { ns: "common" })
: t("no", { ns: "common" })}
</span>
</div>
)}
{probeResult.ptz_supported && probeResult.autotrack_supported && (
<div>
<span className="text-muted-foreground">
{t("cameraWizard.step2.autotrackingSupport")}:
</span>{" "}
<span className="text-primary-variant">
{t("yes", { ns: "common" })}
</span>
</div>
)}
{probeResult.ptz_supported &&
probeResult.presets_count !== undefined && (
<div>
<span className="text-muted-foreground">
{t("cameraWizard.step2.presets")}:
</span>{" "}
<span className="text-primary-variant">
{probeResult.presets_count}
</span>
</div>
)}
</CardContent>
</Card>
</div>
<div className="space-y-2">
{rtspCandidates.length > 0 && (
<div className="mt-5 space-y-2">
<div className="text-sm">
{t("cameraWizard.step2.rtspCandidates")}
</div>
<div className="text-sm text-muted-foreground">
{t("cameraWizard.step2.rtspCandidatesDescription")}
</div>
<div className="space-y-2">
{rtspCandidates.map((candidate, idx) => {
const isSelected = !!selectedUris?.includes(candidate.uri);
const candidateTest = candidateTests?.[candidate.uri];
const isTesting = testingCandidates?.[candidate.uri];
return (
<CandidateItem
key={idx}
index={idx}
candidate={candidate}
copiedUri={copiedUri}
onCopy={() => handleCopyUri(candidate.uri)}
isSelected={isSelected}
testCandidate={testCandidate}
candidateTest={candidateTest}
isTesting={isTesting}
/>
);
})}
</div>
</div>
)}
</div>
</>
);
}
type CandidateItemProps = {
candidate: OnvifRtspCandidate;
index?: number;
copiedUri: string | null;
onCopy: () => void;
isSelected?: boolean;
testCandidate?: (uri: string) => void;
candidateTest?: TestResult | { success: false; error: string };
isTesting?: boolean;
};
function CandidateItem({
index,
candidate,
copiedUri,
onCopy,
isSelected,
testCandidate,
candidateTest,
isTesting,
}: CandidateItemProps) {
const { t } = useTranslation(["views/settings"]);
const [showFull, setShowFull] = useState(false);
return (
<Card
className={cn(
isSelected &&
"outline outline-[3px] -outline-offset-[2.8px] outline-selected duration-200",
)}
>
<CardContent className="p-4">
<div className="flex flex-col space-y-4">
<div className="flex items-center justify-between">
<div>
<h4 className="font-medium">
{t("cameraWizard.step2.candidateStreamTitle", {
number: (index ?? 0) + 1,
})}
</h4>
{candidateTest?.success && (
<div className="mt-1 text-sm text-muted-foreground">
{[
candidateTest.resolution,
candidateTest.fps
? `${candidateTest.fps} ${t(
"cameraWizard.testResultLabels.fps",
)}`
: null,
candidateTest.videoCodec,
candidateTest.audioCodec,
]
.filter(Boolean)
.join(" · ")}
</div>
)}
</div>
<div className="flex flex-shrink-0 items-center gap-2">
{candidateTest?.success && (
<div className="flex items-center gap-2 text-sm">
<FaCircleCheck className="size-4 text-success" />
<span className="text-success">
{t("cameraWizard.step2.connected")}
</span>
</div>
)}
{candidateTest && !candidateTest.success && (
<div className="flex items-center gap-2 text-sm">
<LuX className="size-4 text-danger" />
<span className="text-danger">
{t("cameraWizard.step2.notConnected")}
</span>
</div>
)}
</div>
</div>
<div className="mt-1 flex items-start gap-2">
<p
className="flex-1 cursor-pointer break-all text-sm text-primary-variant hover:underline"
onClick={() => setShowFull((s) => !s)}
title={t("cameraWizard.step2.toggleUriView")}
>
{showFull ? candidate.uri : maskUri(candidate.uri)}
</p>
<div className="flex items-center gap-2">
<Button
size="sm"
variant="ghost"
onClick={onCopy}
className="mr-4 size-8 p-0"
title={t("cameraWizard.step2.uriCopy")}
>
{copiedUri === candidate.uri ? (
<FaCheck className="size-3" />
) : (
<FaCopy className="size-3" />
)}
</Button>
<Button
size="sm"
variant="outline"
disabled={isTesting}
onClick={() => testCandidate?.(candidate.uri)}
className="h-8 px-3 text-sm"
>
{isTesting ? (
<>
<ActivityIndicator className="mr-2 size-4" />{" "}
{t("cameraWizard.step2.testConnection")}
</>
) : (
t("cameraWizard.step2.testConnection")
)}
</Button>
</div>
</div>
</div>
</CardContent>
</Card>
);
}
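`CandidateItem` above toggles between the raw URI and `maskUri(candidate.uri)` so credentials stay hidden by default. The real `maskUri` lives in `@/utils/cameraUtil` and its implementation is not part of this diff; the following is only a plausible sketch of that kind of display masking:

```typescript
// Sketch only: replace the "user:pass@" authority segment of an RTSP/HTTP URI
// with "*:*@" for display. The actual maskUri in cameraUtil may differ.
function maskCredentials(uri: string): string {
  // Matches "//<anything without / or @>@", i.e. the userinfo portion.
  return uri.replace(/\/\/[^/@]+@/, "//*:*@");
}
```

URIs without embedded credentials pass through unchanged, which matches how the component can show candidates that were probed without auth.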

View File

@@ -2,11 +2,13 @@ import { Button } from "@/components/ui/button";
import {
Form,
FormControl,
FormDescription,
FormField,
FormItem,
FormLabel,
FormMessage,
} from "@/components/ui/form";
import { Checkbox } from "@/components/ui/checkbox";
import { Input } from "@/components/ui/input";
import {
Select,
@@ -15,15 +17,13 @@ import {
SelectTrigger,
SelectValue,
} from "@/components/ui/select";
import { RadioGroup, RadioGroupItem } from "@/components/ui/radio-group";
import { useForm } from "react-hook-form";
import { zodResolver } from "@hookform/resolvers/zod";
import { z } from "zod";
import { useTranslation } from "react-i18next";
import { useState, useCallback, useMemo } from "react";
import { LuEye, LuEyeOff } from "react-icons/lu";
import ActivityIndicator from "@/components/indicators/activity-indicator";
import axios from "axios";
import { toast } from "sonner";
import useSWR from "swr";
import { FrigateConfig } from "@/types/frigateConfig";
import {
@@ -31,20 +31,13 @@ import {
CameraBrand,
CAMERA_BRANDS,
CAMERA_BRAND_VALUES,
TestResult,
FfprobeStream,
StreamRole,
StreamConfig,
} from "@/types/cameraWizard";
import { FaCircleCheck } from "react-icons/fa6";
import { Card, CardContent, CardTitle } from "../../ui/card";
import {
Popover,
PopoverContent,
PopoverTrigger,
} from "@/components/ui/popover";
import { LuInfo } from "react-icons/lu";
import { detectReolinkCamera } from "@/utils/cameraUtil";
type Step1NameCameraProps = {
wizardData: Partial<WizardFormData>;
@@ -63,9 +56,9 @@ export default function Step1NameCamera({
const { t } = useTranslation(["views/settings"]);
const { data: config } = useSWR<FrigateConfig>("config");
const [showPassword, setShowPassword] = useState(false);
const [isTesting, setIsTesting] = useState(false);
const [testStatus, setTestStatus] = useState<string>("");
const [testResult, setTestResult] = useState<TestResult | null>(null);
const [probeMode, setProbeMode] = useState<boolean>(
wizardData.probeMode ?? true,
);
const existingCameraNames = useMemo(() => {
if (!config?.cameras) {
@@ -88,6 +81,8 @@ export default function Step1NameCamera({
username: z.string().optional(),
password: z.string().optional(),
brandTemplate: z.enum(CAMERA_BRAND_VALUES).optional(),
onvifPort: z.coerce.number().int().min(1).max(65535).optional(),
useDigestAuth: z.boolean().optional(),
customUrl: z
.string()
.optional()
@@ -124,6 +119,8 @@ export default function Step1NameCamera({
? (wizardData.brandTemplate as CameraBrand)
: "dahua",
customUrl: wizardData.customUrl || "",
onvifPort: wizardData.onvifPort ?? 80,
useDigestAuth: wizardData.useDigestAuth ?? false,
},
mode: "onChange",
});
@@ -132,271 +129,238 @@ export default function Step1NameCamera({
const watchedHost = form.watch("host");
const watchedCustomUrl = form.watch("customUrl");
const isTestButtonEnabled =
watchedBrand === "other"
? !!(watchedCustomUrl && watchedCustomUrl.trim())
: !!(watchedHost && watchedHost.trim());
const hostPresent = !!(watchedHost && watchedHost.trim());
const customPresent = !!(watchedCustomUrl && watchedCustomUrl.trim());
const cameraNamePresent = !!(form.getValues().cameraName || "").trim();
const generateDynamicStreamUrl = useCallback(
async (data: z.infer<typeof step1FormData>): Promise<string | null> => {
const brand = CAMERA_BRANDS.find((b) => b.value === data.brandTemplate);
if (!brand || !data.host) return null;
let protocol = undefined;
if (data.brandTemplate === "reolink" && data.username && data.password) {
try {
protocol = await detectReolinkCamera(
data.host,
data.username,
data.password,
);
} catch (error) {
return null;
}
}
// Use detected protocol or fallback to rtsp
const protocolKey = protocol || "rtsp";
const templates: Record<string, string> = brand.dynamicTemplates || {};
if (Object.keys(templates).includes(protocolKey)) {
const template =
templates[protocolKey as keyof typeof brand.dynamicTemplates];
return template
.replace("{username}", data.username || "")
.replace("{password}", data.password || "")
.replace("{host}", data.host);
}
return null;
},
[],
);
const generateStreamUrl = useCallback(
async (data: z.infer<typeof step1FormData>): Promise<string> => {
if (data.brandTemplate === "other") {
return data.customUrl || "";
}
const brand = CAMERA_BRANDS.find((b) => b.value === data.brandTemplate);
if (!brand || !data.host) return "";
if (brand.template === "dynamic" && "dynamicTemplates" in brand) {
const dynamicUrl = await generateDynamicStreamUrl(data);
if (dynamicUrl) {
return dynamicUrl;
}
return "";
}
return brand.template
.replace("{username}", data.username || "")
.replace("{password}", data.password || "")
.replace("{host}", data.host);
},
[generateDynamicStreamUrl],
);
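`generateStreamUrl` above fills `{username}`, `{password}`, and `{host}` placeholders in a brand template (or a protocol-specific dynamic template for Reolink). A standalone sketch of that substitution, where the sample template below is illustrative rather than a real `CAMERA_BRANDS` entry:

```typescript
// Sketch of the placeholder substitution used by generateStreamUrl above.
// Missing username/password collapse to empty strings, matching the code.
function fillTemplate(
  template: string,
  data: { username?: string; password?: string; host: string },
): string {
  return template
    .replace("{username}", data.username || "")
    .replace("{password}", data.password || "")
    .replace("{host}", data.host);
}
```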
const testConnection = useCallback(async () => {
const data = form.getValues();
const streamUrl = await generateStreamUrl(data);
if (!streamUrl) {
toast.error(t("cameraWizard.commonErrors.noUrl"));
return;
}
setIsTesting(true);
setTestStatus("");
setTestResult(null);
try {
// First get probe data for metadata
setTestStatus(t("cameraWizard.step1.testing.probingMetadata"));
const probeResponse = await axios.get("ffprobe", {
params: { paths: streamUrl, detailed: true },
timeout: 10000,
});
let probeData = null;
if (
probeResponse.data &&
probeResponse.data.length > 0 &&
probeResponse.data[0].return_code === 0
) {
probeData = probeResponse.data[0];
}
// Then get snapshot for preview (only if probe succeeded)
let snapshotBlob = null;
if (probeData) {
setTestStatus(t("cameraWizard.step1.testing.fetchingSnapshot"));
try {
const snapshotResponse = await axios.get("ffprobe/snapshot", {
params: { url: streamUrl },
responseType: "blob",
timeout: 10000,
});
snapshotBlob = snapshotResponse.data;
} catch (snapshotError) {
// Snapshot is optional, don't fail if it doesn't work
toast.warning(t("cameraWizard.step1.warnings.noSnapshot"));
}
}
if (probeData) {
const ffprobeData = probeData.stdout;
const streams = ffprobeData.streams || [];
const videoStream = streams.find(
(s: FfprobeStream) =>
s.codec_type === "video" ||
s.codec_name?.includes("h264") ||
s.codec_name?.includes("hevc"),
);
const audioStream = streams.find(
(s: FfprobeStream) =>
s.codec_type === "audio" ||
s.codec_name?.includes("aac") ||
s.codec_name?.includes("mp3") ||
s.codec_name?.includes("pcm_mulaw") ||
s.codec_name?.includes("pcm_alaw"),
);
const resolution = videoStream
? `${videoStream.width}x${videoStream.height}`
: undefined;
// Extract FPS from rational (e.g., "15/1" -> 15)
const fps = videoStream?.avg_frame_rate
? parseFloat(videoStream.avg_frame_rate.split("/")[0]) /
parseFloat(videoStream.avg_frame_rate.split("/")[1])
: undefined;
// Convert snapshot blob to base64 if available
let snapshotBase64 = undefined;
if (snapshotBlob) {
snapshotBase64 = await new Promise<string>((resolve) => {
const reader = new FileReader();
reader.onload = () => resolve(reader.result as string);
reader.readAsDataURL(snapshotBlob);
});
}
const testResult: TestResult = {
success: true,
snapshot: snapshotBase64,
resolution,
videoCodec: videoStream?.codec_name,
audioCodec: audioStream?.codec_name,
fps: fps && !isNaN(fps) ? fps : undefined,
};
setTestResult(testResult);
onUpdate({ streams: [{ id: "", url: "", roles: [], testResult }] });
toast.success(t("cameraWizard.step1.testSuccess"));
} else {
const error =
Array.isArray(probeResponse.data?.[0]?.stderr) &&
probeResponse.data[0].stderr.length > 0
? probeResponse.data[0].stderr.join("\n")
: "Unable to probe stream";
setTestResult({
success: false,
error: error,
});
toast.error(t("cameraWizard.commonErrors.testFailed", { error }), {
duration: 6000,
});
}
} catch (error) {
const axiosError = error as {
response?: { data?: { message?: string; detail?: string } };
message?: string;
};
const errorMessage =
axiosError.response?.data?.message ||
axiosError.response?.data?.detail ||
axiosError.message ||
"Connection failed";
setTestResult({
success: false,
error: errorMessage,
});
toast.error(
t("cameraWizard.commonErrors.testFailed", { error: errorMessage }),
{
duration: 10000,
},
);
} finally {
setIsTesting(false);
setTestStatus("");
}
}, [form, generateStreamUrl, t, onUpdate]);
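In `testConnection` above, ffprobe reports the frame rate as a rational string (`avg_frame_rate`, e.g. `"15/1"` or `"30000/1001"`), which is converted to a number and discarded when it does not parse cleanly. A minimal sketch of that conversion (`parseAvgFrameRate` is a hypothetical helper name):

```typescript
// Sketch of the avg_frame_rate handling above: split the rational, divide,
// and return undefined when the result is not a valid number (e.g. "0/0").
function parseAvgFrameRate(rate?: string): number | undefined {
  if (!rate) return undefined;
  const [num, den] = rate.split("/").map((p) => parseFloat(p));
  const fps = num / den;
  return !isNaN(fps) ? fps : undefined;
}
```

Like the inline code, this only guards against NaN; an ill-formed rate such as `"15/0"` would still produce a non-finite value.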
const isContinueButtonEnabled =
cameraNamePresent &&
(probeMode
? hostPresent
: watchedBrand === "other"
? customPresent
: hostPresent);
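The `isContinueButtonEnabled` expression above gates the continue button: a camera name is always required, probe mode needs only a host, and manual mode needs either a host or, for the "other" brand, a custom URL. A sketch of the same decision as a pure function (`canContinue` is a hypothetical name, not in the diff):

```typescript
// Sketch of the continue-button gating above, extracted for clarity.
function canContinue(opts: {
  cameraName: string;
  probeMode: boolean;
  brand: string;
  host?: string;
  customUrl?: string;
}): boolean {
  const hostPresent = !!opts.host?.trim();
  const customPresent = !!opts.customUrl?.trim();
  if (!opts.cameraName.trim()) return false;
  if (opts.probeMode) return hostPresent;
  return opts.brand === "other" ? customPresent : hostPresent;
}
```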
const onSubmit = (data: z.infer<typeof step1FormData>) => {
onUpdate(data);
onUpdate({ ...data, probeMode });
};
const handleContinue = useCallback(async () => {
const data = form.getValues();
const streamUrl = await generateStreamUrl(data);
const streamId = `stream_${Date.now()}`;
const streamConfig: StreamConfig = {
id: streamId,
url: streamUrl,
roles: ["detect" as StreamRole],
resolution: testResult?.resolution,
testResult: testResult || undefined,
userTested: false,
};
const updatedData = {
...data,
streams: [streamConfig],
};
onNext(updatedData);
}, [form, generateStreamUrl, testResult, onNext]);
const isValid = await form.trigger();
if (isValid) {
const data = form.getValues();
onNext({ ...data, probeMode });
}
}, [form, probeMode, onNext]);
return (
<div className="space-y-6">
{!testResult?.success && (
<>
<div className="text-sm text-muted-foreground">
{t("cameraWizard.step1.description")}
</div>
<div className="text-sm text-muted-foreground">
{t("cameraWizard.step1.description")}
</div>
<Form {...form}>
<form onSubmit={form.handleSubmit(onSubmit)} className="space-y-4">
<FormField
control={form.control}
name="cameraName"
render={({ field }) => (
<FormItem>
<FormLabel className="text-primary-variant">
{t("cameraWizard.step1.cameraName")}
</FormLabel>
<FormControl>
<Form {...form}>
<form onSubmit={form.handleSubmit(onSubmit)} className="space-y-4">
<FormField
control={form.control}
name="cameraName"
render={({ field }) => (
<FormItem>
<FormLabel className="text-primary-variant">
{t("cameraWizard.step1.cameraName")}
</FormLabel>
<FormControl>
<Input
className="text-md h-8"
placeholder={t("cameraWizard.step1.cameraNamePlaceholder")}
{...field}
/>
</FormControl>
<FormMessage />
</FormItem>
)}
/>
<div className="space-y-4">
<FormField
control={form.control}
name="host"
render={({ field }) => (
<FormItem>
<FormLabel className="text-primary-variant">
{t("cameraWizard.step1.host")}
</FormLabel>
<FormControl>
<Input
className="text-md h-8"
placeholder="192.168.1.100"
{...field}
/>
</FormControl>
<FormMessage />
</FormItem>
)}
/>
<FormField
control={form.control}
name="username"
render={({ field }) => (
<FormItem>
<FormLabel className="text-primary-variant">
{t("cameraWizard.step1.username")}
</FormLabel>
<FormControl>
<Input
className="text-md h-8"
placeholder={t("cameraWizard.step1.usernamePlaceholder")}
{...field}
/>
</FormControl>
<FormMessage />
</FormItem>
)}
/>
<FormField
control={form.control}
name="password"
render={({ field }) => (
<FormItem>
<FormLabel className="text-primary-variant">
{t("cameraWizard.step1.password")}
</FormLabel>
<FormControl>
<div className="relative">
<Input
className="text-md h-8"
className="text-md h-8 pr-10"
type={showPassword ? "text" : "password"}
placeholder={t(
"cameraWizard.step1.cameraNamePlaceholder",
"cameraWizard.step1.passwordPlaceholder",
)}
{...field}
/>
</FormControl>
<FormMessage />
</FormItem>
)}
/>
<Button
type="button"
variant="ghost"
size="sm"
className="absolute right-0 top-0 h-full px-3 py-2 hover:bg-transparent"
onClick={() => setShowPassword(!showPassword)}
>
{showPassword ? (
<LuEyeOff className="size-4" />
) : (
<LuEye className="size-4" />
)}
</Button>
</div>
</FormControl>
<FormMessage />
</FormItem>
)}
/>
</div>
<div className="space-y-3 pt-4">
<FormLabel className="text-primary-variant">
{t("cameraWizard.step1.detectionMethod")}
</FormLabel>
<RadioGroup
value={probeMode ? "probe" : "manual"}
onValueChange={(value) => {
setProbeMode(value === "probe");
}}
>
<div className="flex items-center space-x-2">
<RadioGroupItem
value="probe"
id="probe-mode"
className={
probeMode
? "bg-selected from-selected/50 to-selected/90 text-selected"
: "bg-secondary from-secondary/50 to-secondary/90 text-secondary"
}
/>
<label htmlFor="probe-mode" className="cursor-pointer text-sm">
{t("cameraWizard.step1.probeMode")}
</label>
</div>
<div className="flex items-center space-x-2">
<RadioGroupItem
value="manual"
id="manual-mode"
className={
!probeMode
? "bg-selected from-selected/50 to-selected/90 text-selected"
: "bg-secondary from-secondary/50 to-secondary/90 text-secondary"
}
/>
<label htmlFor="manual-mode" className="cursor-pointer text-sm">
{t("cameraWizard.step1.manualMode")}
</label>
</div>
</RadioGroup>
<FormDescription>
{t("cameraWizard.step1.detectionMethodDescription")}
</FormDescription>
</div>
{probeMode && (
<FormField
control={form.control}
name="onvifPort"
render={({ field, fieldState }) => (
<FormItem>
<FormLabel className="text-primary-variant">
{t("cameraWizard.step1.onvifPort")}
</FormLabel>
<FormControl>
<Input
className="text-md h-8"
type="text"
{...field}
placeholder="80"
/>
</FormControl>
<FormDescription>
{t("cameraWizard.step1.onvifPortDescription")}
</FormDescription>
<FormMessage>
{fieldState.error ? fieldState.error.message : null}
</FormMessage>
</FormItem>
)}
/>
)}
{probeMode && (
<FormField
control={form.control}
name="useDigestAuth"
render={({ field }) => (
<FormItem className="flex items-start space-x-2">
<FormControl>
<Checkbox
className="size-5 text-white accent-white data-[state=checked]:bg-selected data-[state=checked]:text-white"
checked={!!field.value}
onCheckedChange={(val) => field.onChange(!!val)}
/>
</FormControl>
<div className="flex flex-1 flex-col space-y-1">
<FormLabel className="mb-0 text-primary-variant">
{t("cameraWizard.step1.useDigestAuth")}
</FormLabel>
<FormDescription className="mt-0">
{t("cameraWizard.step1.useDigestAuthDescription")}
</FormDescription>
</div>
</FormItem>
)}
/>
)}
{!probeMode && (
<div className="space-y-4">
<FormField
control={form.control}
name="brandTemplate"
@@ -463,90 +427,6 @@ export default function Step1NameCamera({
)}
/>
{watchedBrand !== "other" && (
<>
<FormField
control={form.control}
name="host"
render={({ field }) => (
<FormItem>
<FormLabel className="text-primary-variant">
{t("cameraWizard.step1.host")}
</FormLabel>
<FormControl>
<Input
className="text-md h-8"
placeholder="192.168.1.100"
{...field}
/>
</FormControl>
<FormMessage />
</FormItem>
)}
/>
<FormField
control={form.control}
name="username"
render={({ field }) => (
<FormItem>
<FormLabel className="text-primary-variant">
{t("cameraWizard.step1.username")}
</FormLabel>
<FormControl>
<Input
className="text-md h-8"
placeholder={t(
"cameraWizard.step1.usernamePlaceholder",
)}
{...field}
/>
</FormControl>
<FormMessage />
</FormItem>
)}
/>
<FormField
control={form.control}
name="password"
render={({ field }) => (
<FormItem>
<FormLabel className="text-primary-variant">
{t("cameraWizard.step1.password")}
</FormLabel>
<FormControl>
<div className="relative">
<Input
className="text-md h-8 pr-10"
type={showPassword ? "text" : "password"}
placeholder={t(
"cameraWizard.step1.passwordPlaceholder",
)}
{...field}
/>
<Button
type="button"
variant="ghost"
size="sm"
className="absolute right-0 top-0 h-full px-3 py-2 hover:bg-transparent"
onClick={() => setShowPassword(!showPassword)}
>
{showPassword ? (
<LuEyeOff className="size-4" />
) : (
<LuEye className="size-4" />
)}
</Button>
</div>
</FormControl>
<FormMessage />
</FormItem>
)}
/>
</>
)}
{watchedBrand == "other" && (
<FormField
control={form.control}
@@ -568,124 +448,25 @@ export default function Step1NameCamera({
)}
/>
)}
</form>
</Form>
</>
)}
</div>
)}
</form>
</Form>
{testResult?.success && (
<div className="p-4">
<div className="mb-3 flex flex-row items-center gap-2 text-sm font-medium text-success">
<FaCircleCheck className="size-4" />
{t("cameraWizard.step1.testSuccess")}
</div>
<div className="space-y-3">
{testResult.snapshot ? (
<div className="relative flex justify-center">
<img
src={testResult.snapshot}
alt="Camera snapshot"
className="max-h-[50dvh] max-w-full rounded-lg object-contain"
/>
<div className="absolute bottom-2 right-2 rounded-md bg-black/70 p-3 text-sm backdrop-blur-sm">
<div className="space-y-1">
<StreamDetails testResult={testResult} />
</div>
</div>
</div>
) : (
<Card className="p-4">
<CardTitle className="mb-2 text-sm">
{t("cameraWizard.step1.streamDetails")}
</CardTitle>
<CardContent className="p-0 text-sm">
<StreamDetails testResult={testResult} />
</CardContent>
</Card>
)}
</div>
</div>
)}
{isTesting && (
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<ActivityIndicator className="size-4" />
{testStatus}
</div>
)}
<div className="flex flex-col gap-3 pt-3 sm:flex-row sm:justify-end sm:gap-4">
<Button type="button" onClick={onCancel} className="sm:flex-1">
{t("button.cancel", { ns: "common" })}
</Button>
<Button
type="button"
onClick={testResult?.success ? () => setTestResult(null) : onCancel}
className="sm:flex-1"
onClick={handleContinue}
disabled={!isContinueButtonEnabled}
variant="select"
className="flex items-center justify-center gap-2 sm:flex-1"
>
{testResult?.success
? t("button.back", { ns: "common" })
: t("button.cancel", { ns: "common" })}
{t("button.continue", { ns: "common" })}
</Button>
{testResult?.success ? (
<Button
type="button"
onClick={handleContinue}
variant="select"
className="flex items-center justify-center gap-2 sm:flex-1"
>
{t("button.continue", { ns: "common" })}
</Button>
) : (
<Button
type="button"
onClick={testConnection}
disabled={isTesting || !isTestButtonEnabled}
variant="select"
className="flex items-center justify-center gap-2 sm:flex-1"
>
{t("cameraWizard.step1.testConnection")}
</Button>
)}
</div>
</div>
);
}
function StreamDetails({ testResult }: { testResult: TestResult }) {
const { t } = useTranslation(["views/settings"]);
return (
<>
{testResult.resolution && (
<div>
<span className="text-white/70">
{t("cameraWizard.testResultLabels.resolution")}:
</span>{" "}
<span className="text-white">{testResult.resolution}</span>
</div>
)}
{testResult.fps && (
<div>
<span className="text-white/70">
{t("cameraWizard.testResultLabels.fps")}:
</span>{" "}
<span className="text-white">{testResult.fps}</span>
</div>
)}
{testResult.videoCodec && (
<div>
<span className="text-white/70">
{t("cameraWizard.testResultLabels.video")}:
</span>{" "}
<span className="text-white">{testResult.videoCodec}</span>
</div>
)}
{testResult.audioCodec && (
<div>
<span className="text-white/70">
{t("cameraWizard.testResultLabels.audio")}:
</span>{" "}
<span className="text-white">{testResult.audioCodec}</span>
</div>
)}
</>
);
}

View File

@@ -0,0 +1,725 @@
import { Button } from "@/components/ui/button";
import { useTranslation } from "react-i18next";
import { useState, useCallback, useEffect } from "react";
import ActivityIndicator from "@/components/indicators/activity-indicator";
import axios from "axios";
import { toast } from "sonner";
import type {
WizardFormData,
TestResult,
StreamConfig,
StreamRole,
OnvifProbeResponse,
CandidateTestMap,
FfprobeStream,
FfprobeData,
FfprobeResponse,
} from "@/types/cameraWizard";
import { FaCircleCheck } from "react-icons/fa6";
import { Card, CardContent, CardTitle } from "../../ui/card";
import OnvifProbeResults from "./OnvifProbeResults";
import { CAMERA_BRANDS } from "@/types/cameraWizard";
import { detectReolinkCamera } from "@/utils/cameraUtil";
type Step2ProbeOrSnapshotProps = {
wizardData: Partial<WizardFormData>;
onUpdate: (data: Partial<WizardFormData>) => void;
onNext: (data?: Partial<WizardFormData>) => void;
onBack: () => void;
probeMode: boolean;
};
export default function Step2ProbeOrSnapshot({
wizardData,
onUpdate,
onNext,
onBack,
probeMode,
}: Step2ProbeOrSnapshotProps) {
const { t } = useTranslation(["views/settings"]);
const [isTesting, setIsTesting] = useState(false);
const [testStatus, setTestStatus] = useState<string>("");
const [testResult, setTestResult] = useState<TestResult | null>(null);
const [isProbing, setIsProbing] = useState(false);
const [probeError, setProbeError] = useState<string | null>(null);
const [probeResult, setProbeResult] = useState<OnvifProbeResponse | null>(
null,
);
const [testingCandidates, setTestingCandidates] = useState<
Record<string, boolean>
>({} as Record<string, boolean>);
const [candidateTests, setCandidateTests] = useState<CandidateTestMap>(
{} as CandidateTestMap,
);
const probeUri = useCallback(
async (
uri: string,
fetchSnapshot = false,
setStatus?: (s: string) => void,
): Promise<TestResult> => {
try {
const probeResponse = await axios.get("ffprobe", {
params: { paths: uri, detailed: true },
timeout: 10000,
});
let probeData: FfprobeResponse | null = null;
if (
probeResponse.data &&
probeResponse.data.length > 0 &&
probeResponse.data[0].return_code === 0
) {
probeData = probeResponse.data[0];
}
if (!probeData) {
const error =
Array.isArray(probeResponse.data?.[0]?.stderr) &&
probeResponse.data[0].stderr.length > 0
? probeResponse.data[0].stderr.join("\n")
: "Unable to probe stream";
return { success: false, error };
}
let ffprobeData: FfprobeData;
if (typeof probeData.stdout === "string") {
try {
ffprobeData = JSON.parse(probeData.stdout as string) as FfprobeData;
} catch {
ffprobeData = { streams: [] };
}
} else {
ffprobeData = probeData.stdout as FfprobeData;
}
const streams = ffprobeData.streams || [];
const videoStream = streams.find(
(s: FfprobeStream) =>
s.codec_type === "video" ||
s.codec_name?.includes("h264") ||
s.codec_name?.includes("hevc"),
);
const audioStream = streams.find(
(s: FfprobeStream) =>
s.codec_type === "audio" ||
s.codec_name?.includes("aac") ||
s.codec_name?.includes("mp3") ||
s.codec_name?.includes("pcm_mulaw") ||
s.codec_name?.includes("pcm_alaw"),
);
let resolution: string | undefined = undefined;
if (videoStream) {
const width = Number(videoStream.width || 0);
const height = Number(videoStream.height || 0);
if (width > 0 && height > 0) {
resolution = `${width}x${height}`;
}
}
const fps = videoStream?.avg_frame_rate
? parseFloat(videoStream.avg_frame_rate.split("/")[0]) /
parseFloat(videoStream.avg_frame_rate.split("/")[1])
: undefined;
let snapshotBase64: string | undefined = undefined;
if (fetchSnapshot) {
if (setStatus) {
setStatus(t("cameraWizard.step2.testing.fetchingSnapshot"));
}
try {
const snapshotResponse = await axios.get("ffprobe/snapshot", {
params: { url: uri },
responseType: "blob",
timeout: 10000,
});
const snapshotBlob = snapshotResponse.data;
snapshotBase64 = await new Promise<string>((resolve) => {
const reader = new FileReader();
reader.onload = () => resolve(reader.result as string);
reader.readAsDataURL(snapshotBlob);
});
} catch {
// snapshot is optional; ignore fetch failures and continue without one
snapshotBase64 = undefined;
}
}
const streamTestResult: TestResult = {
success: true,
snapshot: snapshotBase64,
resolution,
videoCodec: videoStream?.codec_name,
audioCodec: audioStream?.codec_name,
fps: fps && !isNaN(fps) ? fps : undefined,
};
return streamTestResult;
} catch (err) {
const axiosError = err as {
response?: { data?: { message?: string; detail?: string } };
message?: string;
};
const errorMessage =
axiosError.response?.data?.message ||
axiosError.response?.data?.detail ||
axiosError.message ||
"Connection failed";
return { success: false, error: errorMessage };
}
},
[t],
);
const probeCamera = useCallback(async () => {
if (!wizardData.host) {
toast.error(t("cameraWizard.step2.errors.hostRequired"));
return;
}
setIsProbing(true);
setProbeError(null);
setProbeResult(null);
try {
const response = await axios.get("/onvif/probe", {
params: {
host: wizardData.host,
port: wizardData.onvifPort ?? 80,
username: wizardData.username || "",
password: wizardData.password || "",
test: false,
auth_type: wizardData.useDigestAuth ? "digest" : "basic",
},
timeout: 30000,
});
if (response.data && response.data.success) {
setProbeResult(response.data);
// Extract candidate URLs and pass to wizardData
const candidateUris = (response.data.rtsp_candidates || [])
.filter((c: { source: string }) => c.source === "GetStreamUri")
.map((c: { uri: string }) => c.uri);
onUpdate({
probeMode: true,
probeCandidates: candidateUris,
candidateTests: {},
});
} else {
setProbeError(response.data?.message || "Probe failed");
}
} catch (error) {
const axiosError = error as {
response?: { data?: { message?: string; detail?: string } };
message?: string;
};
const errorMessage =
axiosError.response?.data?.message ||
axiosError.response?.data?.detail ||
axiosError.message ||
"Failed to probe camera";
setProbeError(errorMessage);
toast.error(t("cameraWizard.step2.probeFailed", { error: errorMessage }));
} finally {
setIsProbing(false);
}
}, [wizardData, onUpdate, t]);
const testAllSelectedCandidates = useCallback(async () => {
const uris = (probeResult?.rtsp_candidates || [])
.filter((c) => c.source === "GetStreamUri")
.map((c) => c.uri);
if (!uris || uris.length === 0) {
toast.error(t("cameraWizard.commonErrors.noUrl"));
return;
}
// Prepare an initial stream so the wizard can proceed to step 3.
// Use the first candidate as the initial stream (no extra probing here).
const streamsToCreate: StreamConfig[] = [];
if (uris.length > 0) {
const first = uris[0];
streamsToCreate.push({
id: `stream_${Date.now()}`,
url: first,
roles: ["detect" as const],
testResult: candidateTests[first],
});
}
// Use existing candidateTests state (may contain entries from individual tests)
onNext({
probeMode: true,
probeCandidates: uris,
candidateTests: candidateTests,
streams: streamsToCreate,
});
}, [probeResult, candidateTests, onNext, t]);
const testCandidate = useCallback(
async (uri: string) => {
if (!uri) return;
setTestingCandidates((s) => ({ ...s, [uri]: true }));
try {
const result = await probeUri(uri, false);
setCandidateTests((s) => ({ ...s, [uri]: result }));
} finally {
setTestingCandidates((s) => ({ ...s, [uri]: false }));
}
},
[probeUri],
);
const generateDynamicStreamUrl = useCallback(
async (data: Partial<WizardFormData>): Promise<string | null> => {
const brand = CAMERA_BRANDS.find((b) => b.value === data.brandTemplate);
if (!brand || !data.host) return null;
let protocol = undefined;
if (data.brandTemplate === "reolink" && data.username && data.password) {
try {
protocol = await detectReolinkCamera(
data.host,
data.username,
data.password,
);
} catch {
// Reolink protocol detection failed; no dynamic URL can be built
return null;
}
}
const protocolKey = protocol || "rtsp";
const templates: Record<string, string> = brand.dynamicTemplates || {};
if (Object.keys(templates).includes(protocolKey)) {
const template =
templates[protocolKey as keyof typeof brand.dynamicTemplates];
return template
.replace("{username}", data.username || "")
.replace("{password}", data.password || "")
.replace("{host}", data.host);
}
return null;
},
[],
);
const generateStreamUrl = useCallback(
async (data: Partial<WizardFormData>): Promise<string> => {
if (data.brandTemplate === "other") {
return data.customUrl || "";
}
const brand = CAMERA_BRANDS.find((b) => b.value === data.brandTemplate);
if (!brand || !data.host) return "";
if (brand.template === "dynamic" && "dynamicTemplates" in brand) {
const dynamicUrl = await generateDynamicStreamUrl(data);
if (dynamicUrl) {
return dynamicUrl;
}
return "";
}
return brand.template
.replace("{username}", data.username || "")
.replace("{password}", data.password || "")
.replace("{host}", data.host);
},
[generateDynamicStreamUrl],
);
const testConnection = useCallback(
async (showToast = true) => {
const streamUrl = await generateStreamUrl(wizardData);
if (!streamUrl) {
toast.error(t("cameraWizard.commonErrors.noUrl"));
return;
}
setIsTesting(true);
setTestStatus("");
setTestResult(null);
try {
setTestStatus(t("cameraWizard.step2.testing.probingMetadata"));
const result = await probeUri(streamUrl, true, setTestStatus);
if (result && result.success) {
setTestResult(result);
const streamId = `stream_${Date.now()}`;
onUpdate({
streams: [
{
id: streamId,
url: streamUrl,
roles: ["detect"] as StreamRole[],
testResult: result,
},
],
});
if (showToast) {
toast.success(t("cameraWizard.step2.testSuccess"));
}
} else {
const errMsg = result?.error || "Unable to probe stream";
setTestResult({
success: false,
error: errMsg,
});
if (showToast) {
toast.error(
t("cameraWizard.commonErrors.testFailed", { error: errMsg }),
{
duration: 6000,
},
);
}
}
} catch (error) {
const axiosError = error as {
response?: { data?: { message?: string; detail?: string } };
message?: string;
};
const errorMessage =
axiosError.response?.data?.message ||
axiosError.response?.data?.detail ||
axiosError.message ||
"Connection failed";
setTestResult({
success: false,
error: errorMessage,
});
if (showToast) {
toast.error(
t("cameraWizard.commonErrors.testFailed", { error: errorMessage }),
{
duration: 10000,
},
);
}
} finally {
setIsTesting(false);
setTestStatus("");
}
},
[wizardData, generateStreamUrl, t, onUpdate, probeUri],
);
const handleContinue = useCallback(() => {
onNext();
}, [onNext]);
// Auto-start probe or test when step loads
const [hasStarted, setHasStarted] = useState(false);
useEffect(() => {
if (!hasStarted) {
setHasStarted(true);
if (probeMode) {
probeCamera();
} else {
// Auto-run the connection test but suppress toasts to avoid duplicates
testConnection(false);
}
}
}, [hasStarted, probeMode, probeCamera, testConnection]);
return (
<div className="space-y-6">
{probeMode ? (
// Probe mode: show probe results directly
<>
{probeResult && (
<div className="space-y-4">
<OnvifProbeResults
isLoading={isProbing}
isError={!!probeError}
error={probeError || undefined}
probeResult={probeResult}
onRetry={probeCamera}
testCandidate={testCandidate}
candidateTests={candidateTests}
testingCandidates={testingCandidates}
/>
</div>
)}
<ProbeFooterButtons
isProbing={isProbing}
probeError={probeError}
onBack={onBack}
onTestAll={testAllSelectedCandidates}
onRetry={probeCamera}
// disable next if either the overall testConnection is running or any candidate test is running
isTesting={
isTesting || Object.values(testingCandidates).some((v) => v)
}
candidateCount={
(probeResult?.rtsp_candidates || []).filter(
(c) => c.source === "GetStreamUri",
).length
}
/>
</>
) : (
// Manual mode: show snapshot and stream details
<>
{testResult?.success && (
<div className="p-4">
<div className="mb-3 flex flex-row items-center gap-2 text-sm font-medium text-success">
<FaCircleCheck className="size-4" />
{t("cameraWizard.step2.testSuccess")}
</div>
<div className="space-y-3">
{testResult.snapshot ? (
<div className="relative flex justify-center">
<img
src={testResult.snapshot}
alt="Camera snapshot"
className="max-h-[50dvh] max-w-full rounded-lg object-contain"
/>
<div className="absolute bottom-2 right-2 rounded-md bg-black/70 p-3 text-sm backdrop-blur-sm">
<div className="space-y-1">
<StreamDetails testResult={testResult} />
</div>
</div>
</div>
) : (
<Card className="p-4">
<CardTitle className="mb-2 text-sm">
{t("cameraWizard.step2.streamDetails")}
</CardTitle>
<CardContent className="p-0 text-sm">
<StreamDetails testResult={testResult} />
</CardContent>
</Card>
)}
</div>
</div>
)}
{isTesting && (
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<ActivityIndicator className="size-4" />
{testStatus}
</div>
)}
{testResult && !testResult.success && (
<div className="space-y-4">
<div className="text-sm text-destructive">{testResult.error}</div>
</div>
)}
<ProbeFooterButtons
mode="manual"
isProbing={false}
probeError={null}
onBack={onBack}
onTestAll={testAllSelectedCandidates}
onRetry={probeCamera}
isTesting={
isTesting || Object.values(testingCandidates).some((v) => v)
}
candidateCount={
(probeResult?.rtsp_candidates || []).filter(
(c) => c.source === "GetStreamUri",
).length
}
manualTestSuccess={!!testResult?.success}
onContinue={handleContinue}
onManualTest={testConnection}
/>
</>
)}
</div>
);
}
function StreamDetails({ testResult }: { testResult: TestResult }) {
const { t } = useTranslation(["views/settings"]);
return (
<>
{testResult.resolution && (
<div>
<span className="text-white/70">
{t("cameraWizard.testResultLabels.resolution")}:
</span>{" "}
<span className="text-white">{testResult.resolution}</span>
</div>
)}
{testResult.fps && (
<div>
<span className="text-white/70">
{t("cameraWizard.testResultLabels.fps")}:
</span>{" "}
<span className="text-white">{testResult.fps}</span>
</div>
)}
{testResult.videoCodec && (
<div>
<span className="text-white/70">
{t("cameraWizard.testResultLabels.video")}:
</span>{" "}
<span className="text-white">{testResult.videoCodec}</span>
</div>
)}
{testResult.audioCodec && (
<div>
<span className="text-white/70">
{t("cameraWizard.testResultLabels.audio")}:
</span>{" "}
<span className="text-white">{testResult.audioCodec}</span>
</div>
)}
</>
);
}
type ProbeFooterProps = {
isProbing: boolean;
probeError: string | null;
onBack: () => void;
onTestAll: () => void;
onRetry: () => void;
isTesting: boolean;
candidateCount?: number;
mode?: "probe" | "manual";
manualTestSuccess?: boolean;
onContinue?: () => void;
onManualTest?: () => void;
};
function ProbeFooterButtons({
isProbing,
probeError,
onBack,
onTestAll,
onRetry,
isTesting,
candidateCount = 0,
mode = "probe",
manualTestSuccess,
onContinue,
onManualTest,
}: ProbeFooterProps) {
const { t } = useTranslation(["views/settings"]);
// Loading footer
if (isProbing) {
return (
<div className="space-y-4">
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<ActivityIndicator className="size-4" />
{t("cameraWizard.step2.probing")}
</div>
<div className="flex flex-col gap-3 pt-3 sm:flex-row sm:justify-end sm:gap-4">
<Button type="button" onClick={onBack} disabled className="sm:flex-1">
{t("button.back", { ns: "common" })}
</Button>
<Button
type="button"
disabled
variant="select"
className="flex items-center justify-center gap-2 sm:flex-1"
>
<ActivityIndicator className="size-4" />
{t("cameraWizard.step2.probing")}
</Button>
</div>
</div>
);
}
// Error footer
if (probeError) {
return (
<div className="space-y-4">
<div className="text-sm text-destructive">{probeError}</div>
<div className="flex flex-col gap-3 pt-3 sm:flex-row sm:justify-end sm:gap-4">
<Button type="button" onClick={onBack} className="sm:flex-1">
{t("button.back", { ns: "common" })}
</Button>
<Button
type="button"
onClick={onRetry}
variant="select"
className="flex items-center justify-center gap-2 sm:flex-1"
>
{t("cameraWizard.step2.retry")}
</Button>
</div>
</div>
);
}
// Default footer: show back + test (test disabled if none selected or testing)
// If manual mode, show Continue when test succeeded, otherwise show Test (calls onManualTest)
if (mode === "manual") {
return (
<div className="flex flex-col gap-3 pt-3 sm:flex-row sm:justify-end sm:gap-4">
<Button type="button" onClick={onBack} className="sm:flex-1">
{t("button.back", { ns: "common" })}
</Button>
{manualTestSuccess ? (
<Button
type="button"
onClick={onContinue}
variant="select"
className="flex items-center justify-center gap-2 sm:flex-1"
>
{t("button.continue", { ns: "common" })}
</Button>
) : (
<Button
type="button"
onClick={onManualTest}
disabled={isTesting}
variant="select"
className="flex items-center justify-center gap-2 sm:flex-1"
>
{isTesting ? (
<>
<ActivityIndicator className="size-4" />{" "}
{t("button.continue", { ns: "common" })}
</>
) : (
t("cameraWizard.step2.retry")
)}
</Button>
)}
</div>
);
}
// Default probe footer
return (
<div className="flex flex-col gap-3 pt-3 sm:flex-row sm:justify-end sm:gap-4">
<Button type="button" onClick={onBack} className="sm:flex-1">
{t("button.back", { ns: "common" })}
</Button>
<Button
type="button"
onClick={onTestAll}
disabled={isTesting || (candidateCount ?? 0) === 0}
variant="select"
className="flex items-center justify-center gap-2 sm:flex-1"
>
{t("button.next", { ns: "common" })}
</Button>
</div>
);
}
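The fps value above is derived by splitting ffprobe's `avg_frame_rate` fraction (e.g. `"30000/1001"`) and guarding against `NaN`. A minimal standalone sketch of that parsing; `parseAvgFrameRate` is an illustrative helper name, not part of the component:

```typescript
// Parse ffprobe's avg_frame_rate fraction into frames per second,
// mirroring the split/divide and NaN guard used in probeUri.
function parseAvgFrameRate(rate?: string): number | undefined {
  if (!rate) return undefined;
  const [num, den] = rate.split("/").map((p) => parseFloat(p));
  const fps = num / den;
  // "0/0" or a malformed fraction yields NaN; report it as "unknown".
  return Number.isNaN(fps) ? undefined : fps;
}

console.log(parseAvgFrameRate("30/1")); // 30
console.log(parseAvgFrameRate("0/0")); // undefined
```

The component additionally drops a fps of exactly 0 via its `fps && !isNaN(fps)` check; the sketch keeps only the `NaN` guard for brevity.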


@@ -1,481 +0,0 @@
import { Button } from "@/components/ui/button";
import { Card, CardContent } from "@/components/ui/card";
import { Input } from "@/components/ui/input";
import { Switch } from "@/components/ui/switch";
import { useTranslation } from "react-i18next";
import { useState, useCallback, useMemo } from "react";
import { LuPlus, LuTrash2, LuX } from "react-icons/lu";
import ActivityIndicator from "@/components/indicators/activity-indicator";
import axios from "axios";
import { toast } from "sonner";
import {
WizardFormData,
StreamConfig,
StreamRole,
TestResult,
FfprobeStream,
} from "@/types/cameraWizard";
import { Label } from "../../ui/label";
import { FaCircleCheck } from "react-icons/fa6";
import {
Popover,
PopoverContent,
PopoverTrigger,
} from "@/components/ui/popover";
import { LuInfo, LuExternalLink } from "react-icons/lu";
import { Link } from "react-router-dom";
import { useDocDomain } from "@/hooks/use-doc-domain";
type Step2StreamConfigProps = {
wizardData: Partial<WizardFormData>;
onUpdate: (data: Partial<WizardFormData>) => void;
onBack?: () => void;
onNext?: () => void;
canProceed?: boolean;
};
export default function Step2StreamConfig({
wizardData,
onUpdate,
onBack,
onNext,
canProceed,
}: Step2StreamConfigProps) {
const { t } = useTranslation(["views/settings", "components/dialog"]);
const { getLocaleDocUrl } = useDocDomain();
const [testingStreams, setTestingStreams] = useState<Set<string>>(new Set());
const streams = useMemo(() => wizardData.streams || [], [wizardData.streams]);
const addStream = useCallback(() => {
const newStream: StreamConfig = {
id: `stream_${Date.now()}`,
url: "",
roles: [],
};
onUpdate({
streams: [...streams, newStream],
});
}, [streams, onUpdate]);
const removeStream = useCallback(
(streamId: string) => {
onUpdate({
streams: streams.filter((s) => s.id !== streamId),
});
},
[streams, onUpdate],
);
const updateStream = useCallback(
(streamId: string, updates: Partial<StreamConfig>) => {
onUpdate({
streams: streams.map((s) =>
s.id === streamId ? { ...s, ...updates } : s,
),
});
},
[streams, onUpdate],
);
const getUsedRolesExcludingStream = useCallback(
(excludeStreamId: string) => {
const roles = new Set<StreamRole>();
streams.forEach((stream) => {
if (stream.id !== excludeStreamId) {
stream.roles.forEach((role) => roles.add(role));
}
});
return roles;
},
[streams],
);
const toggleRole = useCallback(
(streamId: string, role: StreamRole) => {
const stream = streams.find((s) => s.id === streamId);
if (!stream) return;
const hasRole = stream.roles.includes(role);
if (hasRole) {
// Allow removing the role
const newRoles = stream.roles.filter((r) => r !== role);
updateStream(streamId, { roles: newRoles });
} else {
// Check if role is already used in another stream
const usedRoles = getUsedRolesExcludingStream(streamId);
if (!usedRoles.has(role)) {
// Allow adding the role
const newRoles = [...stream.roles, role];
updateStream(streamId, { roles: newRoles });
}
}
},
[streams, updateStream, getUsedRolesExcludingStream],
);
const testStream = useCallback(
(stream: StreamConfig) => {
if (!stream.url.trim()) {
toast.error(t("cameraWizard.commonErrors.noUrl"));
return;
}
setTestingStreams((prev) => new Set(prev).add(stream.id));
axios
.get("ffprobe", {
params: { paths: stream.url, detailed: true },
timeout: 10000,
})
.then((response) => {
if (response.data?.[0]?.return_code === 0) {
const probeData = response.data[0];
const streams = probeData.stdout.streams || [];
const videoStream = streams.find(
(s: FfprobeStream) =>
s.codec_type === "video" ||
s.codec_name?.includes("h264") ||
s.codec_name?.includes("h265"),
);
const audioStream = streams.find(
(s: FfprobeStream) =>
s.codec_type === "audio" ||
s.codec_name?.includes("aac") ||
s.codec_name?.includes("mp3"),
);
const resolution = videoStream
? `${videoStream.width}x${videoStream.height}`
: undefined;
const fps = videoStream?.avg_frame_rate
? parseFloat(videoStream.avg_frame_rate.split("/")[0]) /
parseFloat(videoStream.avg_frame_rate.split("/")[1])
: undefined;
const testResult: TestResult = {
success: true,
resolution,
videoCodec: videoStream?.codec_name,
audioCodec: audioStream?.codec_name,
fps: fps && !isNaN(fps) ? fps : undefined,
};
updateStream(stream.id, { testResult, userTested: true });
toast.success(t("cameraWizard.step2.testSuccess"));
} else {
const error = response.data?.[0]?.stderr || "Unknown error";
updateStream(stream.id, {
testResult: { success: false, error },
userTested: true,
});
toast.error(t("cameraWizard.commonErrors.testFailed", { error }));
}
})
.catch((error) => {
const errorMessage =
error.response?.data?.message ||
error.response?.data?.detail ||
"Connection failed";
updateStream(stream.id, {
testResult: { success: false, error: errorMessage },
userTested: true,
});
toast.error(
t("cameraWizard.commonErrors.testFailed", { error: errorMessage }),
);
})
.finally(() => {
setTestingStreams((prev) => {
const newSet = new Set(prev);
newSet.delete(stream.id);
return newSet;
});
});
},
[updateStream, t],
);
const setRestream = useCallback(
(streamId: string) => {
const stream = streams.find((s) => s.id === streamId);
if (!stream) return;
updateStream(streamId, { restream: !stream.restream });
},
[streams, updateStream],
);
const hasDetectRole = streams.some((s) => s.roles.includes("detect"));
return (
<div className="space-y-6">
<div className="text-sm text-secondary-foreground">
{t("cameraWizard.step2.description")}
</div>
<div className="space-y-4">
{streams.map((stream, index) => (
<Card key={stream.id} className="bg-secondary text-primary">
<CardContent className="space-y-4 p-4">
<div className="flex items-center justify-between">
<div>
<h4 className="font-medium">
{t("cameraWizard.step2.streamTitle", { number: index + 1 })}
</h4>
{stream.testResult && stream.testResult.success && (
<div className="mt-1 text-sm text-muted-foreground">
{[
stream.testResult.resolution,
stream.testResult.fps
? `${stream.testResult.fps} ${t("cameraWizard.testResultLabels.fps")}`
: null,
stream.testResult.videoCodec,
stream.testResult.audioCodec,
]
.filter(Boolean)
.join(" · ")}
</div>
)}
</div>
<div className="flex items-center gap-2">
{stream.testResult?.success && (
<div className="flex items-center gap-2 text-sm">
<FaCircleCheck className="size-4 text-success" />
<span className="text-success">
{t("cameraWizard.step2.connected")}
</span>
</div>
)}
{stream.testResult && !stream.testResult.success && (
<div className="flex items-center gap-2 text-sm">
<LuX className="size-4 text-danger" />
<span className="text-danger">
{t("cameraWizard.step2.notConnected")}
</span>
</div>
)}
{streams.length > 1 && (
<Button
variant="ghost"
size="sm"
onClick={() => removeStream(stream.id)}
className="text-secondary-foreground hover:text-secondary-foreground"
>
<LuTrash2 className="size-5" />
</Button>
)}
</div>
</div>
<div className="grid grid-cols-1 gap-4">
<div className="space-y-2">
<label className="text-sm font-medium text-primary-variant">
{t("cameraWizard.step2.url")}
</label>
<div className="flex flex-row items-center gap-2">
<Input
value={stream.url}
onChange={(e) =>
updateStream(stream.id, {
url: e.target.value,
testResult: undefined,
})
}
className="h-8 flex-1"
placeholder={t("cameraWizard.step2.streamUrlPlaceholder")}
/>
<Button
type="button"
onClick={() => testStream(stream)}
disabled={
testingStreams.has(stream.id) || !stream.url.trim()
}
variant="outline"
size="sm"
>
{testingStreams.has(stream.id) && (
<ActivityIndicator className="mr-2 size-4" />
)}
{t("cameraWizard.step2.testStream")}
</Button>
</div>
</div>
</div>
{stream.testResult &&
!stream.testResult.success &&
stream.userTested && (
<div className="rounded-md border border-danger/20 bg-danger/10 p-3 text-sm text-danger">
<div className="font-medium">
{t("cameraWizard.step2.testFailedTitle")}
</div>
<div className="mt-1 text-xs">
{stream.testResult.error}
</div>
</div>
)}
<div className="space-y-2">
<div className="flex items-center gap-1">
<Label className="text-sm font-medium text-primary-variant">
{t("cameraWizard.step2.roles")}
</Label>
<Popover>
<PopoverTrigger asChild>
<Button variant="ghost" size="sm" className="h-4 w-4 p-0">
<LuInfo className="size-3" />
</Button>
</PopoverTrigger>
<PopoverContent className="pointer-events-auto w-80 text-xs">
<div className="space-y-2">
<div className="font-medium">
{t("cameraWizard.step2.rolesPopover.title")}
</div>
<div className="space-y-1 text-muted-foreground">
<div>
<strong>detect</strong> -{" "}
{t("cameraWizard.step2.rolesPopover.detect")}
</div>
<div>
<strong>record</strong> -{" "}
{t("cameraWizard.step2.rolesPopover.record")}
</div>
<div>
<strong>audio</strong> -{" "}
{t("cameraWizard.step2.rolesPopover.audio")}
</div>
</div>
<div className="mt-3 flex items-center text-primary">
<Link
to={getLocaleDocUrl("configuration/cameras")}
target="_blank"
rel="noopener noreferrer"
className="inline"
>
{t("readTheDocumentation", { ns: "common" })}
<LuExternalLink className="ml-2 inline-flex size-3" />
</Link>
</div>
</div>
</PopoverContent>
</Popover>
</div>
<div className="rounded-lg bg-background p-3">
<div className="flex flex-wrap gap-2">
{(["detect", "record", "audio"] as const).map((role) => {
const isUsedElsewhere = getUsedRolesExcludingStream(
stream.id,
).has(role);
const isChecked = stream.roles.includes(role);
return (
<div
key={role}
className="flex w-full items-center justify-between"
>
<span className="text-sm capitalize">{role}</span>
<Switch
checked={isChecked}
onCheckedChange={() => toggleRole(stream.id, role)}
disabled={!isChecked && isUsedElsewhere}
/>
</div>
);
})}
</div>
</div>
</div>
<div className="space-y-2">
<div className="flex items-center gap-1">
<Label className="text-sm font-medium text-primary-variant">
{t("cameraWizard.step2.featuresTitle")}
</Label>
<Popover>
<PopoverTrigger asChild>
<Button variant="ghost" size="sm" className="h-4 w-4 p-0">
<LuInfo className="size-3" />
</Button>
</PopoverTrigger>
<PopoverContent className="pointer-events-auto w-80 text-xs">
<div className="space-y-2">
<div className="font-medium">
{t("cameraWizard.step2.featuresPopover.title")}
</div>
<div className="text-muted-foreground">
{t("cameraWizard.step2.featuresPopover.description")}
</div>
<div className="mt-3 flex items-center text-primary">
<Link
to={getLocaleDocUrl(
"configuration/restream#reduce-connections-to-camera",
)}
target="_blank"
rel="noopener noreferrer"
className="inline"
>
{t("readTheDocumentation", { ns: "common" })}
<LuExternalLink className="ml-2 inline-flex size-3" />
</Link>
</div>
</div>
</PopoverContent>
</Popover>
</div>
<div className="rounded-lg bg-background p-3">
<div className="flex items-center justify-between">
<span className="text-sm">
{t("cameraWizard.step2.go2rtc")}
</span>
<Switch
checked={stream.restream || false}
onCheckedChange={() => setRestream(stream.id)}
/>
</div>
</div>
</div>
</CardContent>
</Card>
))}
<Button
type="button"
onClick={addStream}
variant="outline"
className=""
>
<LuPlus className="mr-2 size-4" />
{t("cameraWizard.step2.addAnotherStream")}
</Button>
</div>
{!hasDetectRole && (
<div className="rounded-lg border border-danger/50 p-3 text-sm text-danger">
{t("cameraWizard.step2.detectRoleWarning")}
</div>
)}
<div className="flex flex-col gap-3 pt-6 sm:flex-row sm:justify-end sm:gap-4">
{onBack && (
<Button type="button" onClick={onBack} className="sm:flex-1">
{t("button.back", { ns: "common" })}
</Button>
)}
{onNext && (
<Button
type="button"
onClick={() => onNext?.()}
disabled={!canProceed}
variant="select"
className="sm:flex-1"
>
{t("button.next", { ns: "common" })}
</Button>
)}
</div>
</div>
);
}
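The role toggling above enforces that each role (`detect`, `record`, `audio`) is held by at most one stream: `toggleRole` consults `getUsedRolesExcludingStream` before adding. A simplified sketch of that exclusivity rule, with types and the `canAddRole` helper invented for illustration:

```typescript
// Exclusive-role rule: a role can be enabled on a stream only when
// no other stream already carries it. Removing a role is always allowed.
type Role = "detect" | "record" | "audio";
type Stream = { id: string; roles: Role[] };

function canAddRole(streams: Stream[], streamId: string, role: Role): boolean {
  // True unless some *other* stream already has this role.
  return !streams.some((s) => s.id !== streamId && s.roles.includes(role));
}

const demo: Stream[] = [
  { id: "a", roles: ["detect"] },
  { id: "b", roles: [] },
];
console.log(canAddRole(demo, "b", "detect")); // false: stream "a" holds it
console.log(canAddRole(demo, "b", "record")); // true
```

In the component the same check drives the `disabled={!isChecked && isUsedElsewhere}` prop on each role switch, so the UI disables rather than silently ignores an invalid toggle.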


@@ -0,0 +1,757 @@
import { Button } from "@/components/ui/button";
import { Card, CardContent } from "@/components/ui/card";
import { Input } from "@/components/ui/input";
import { Switch } from "@/components/ui/switch";
import { useTranslation } from "react-i18next";
import { useState, useCallback, useMemo } from "react";
import { LuPlus, LuTrash2, LuX } from "react-icons/lu";
import ActivityIndicator from "@/components/indicators/activity-indicator";
import axios from "axios";
import { toast } from "sonner";
import {
WizardFormData,
StreamConfig,
StreamRole,
TestResult,
FfprobeStream,
FfprobeData,
FfprobeResponse,
CandidateTestMap,
} from "@/types/cameraWizard";
import { Label } from "../../ui/label";
import { FaCircleCheck } from "react-icons/fa6";
import {
Popover,
PopoverContent,
PopoverTrigger,
} from "@/components/ui/popover";
import { Drawer, DrawerContent, DrawerTrigger } from "@/components/ui/drawer";
import { isMobile } from "react-device-detect";
import {
LuInfo,
LuExternalLink,
LuCheck,
LuChevronsUpDown,
} from "react-icons/lu";
import { Link } from "react-router-dom";
import { useDocDomain } from "@/hooks/use-doc-domain";
import { cn } from "@/lib/utils";
import {
Command,
CommandEmpty,
CommandGroup,
CommandInput,
CommandItem,
CommandList,
} from "@/components/ui/command";
type Step3StreamConfigProps = {
wizardData: Partial<WizardFormData>;
onUpdate: (data: Partial<WizardFormData>) => void;
onBack?: () => void;
onNext?: () => void;
canProceed?: boolean;
};
export default function Step3StreamConfig({
wizardData,
onUpdate,
onBack,
onNext,
canProceed,
}: Step3StreamConfigProps) {
const { t } = useTranslation(["views/settings", "components/dialog"]);
const { getLocaleDocUrl } = useDocDomain();
const [testingStreams, setTestingStreams] = useState<Set<string>>(new Set());
const [openCombobox, setOpenCombobox] = useState<string | null>(null);
const streams = useMemo(() => wizardData.streams || [], [wizardData.streams]);
// Probe mode candidate tracking
const probeCandidates = useMemo(
() => (wizardData.probeCandidates || []) as string[],
[wizardData.probeCandidates],
);
const candidateTests = useMemo(
() => (wizardData.candidateTests || {}) as CandidateTestMap,
[wizardData.candidateTests],
);
const isProbeMode = !!wizardData.probeMode;
const addStream = useCallback(() => {
const newStreamId = `stream_${Date.now()}`;
let initialUrl = "";
if (isProbeMode && probeCandidates.length > 0) {
// pick first candidate not already used
const used = new Set(streams.map((s) => s.url).filter(Boolean));
const firstAvailable = probeCandidates.find((c) => !used.has(c));
if (firstAvailable) {
initialUrl = firstAvailable;
}
}
const newStream: StreamConfig = {
id: newStreamId,
url: initialUrl,
roles: [],
testResult: initialUrl ? candidateTests[initialUrl] : undefined,
userTested: initialUrl ? !!candidateTests[initialUrl] : false,
};
onUpdate({
streams: [...streams, newStream],
});
}, [streams, onUpdate, isProbeMode, probeCandidates, candidateTests]);
const removeStream = useCallback(
(streamId: string) => {
onUpdate({
streams: streams.filter((s) => s.id !== streamId),
});
},
[streams, onUpdate],
);
const updateStream = useCallback(
(streamId: string, updates: Partial<StreamConfig>) => {
onUpdate({
streams: streams.map((s) =>
s.id === streamId ? { ...s, ...updates } : s,
),
});
},
[streams, onUpdate],
);
const getUsedRolesExcludingStream = useCallback(
(excludeStreamId: string) => {
const roles = new Set<StreamRole>();
streams.forEach((stream) => {
if (stream.id !== excludeStreamId) {
stream.roles.forEach((role) => roles.add(role));
}
});
return roles;
},
[streams],
);
const getUsedUrlsExcludingStream = useCallback(
(excludeStreamId: string) => {
const used = new Set<string>();
streams.forEach((s) => {
if (s.id !== excludeStreamId && s.url) {
used.add(s.url);
}
});
return used;
},
[streams],
);
const toggleRole = useCallback(
(streamId: string, role: StreamRole) => {
const stream = streams.find((s) => s.id === streamId);
if (!stream) return;
const hasRole = stream.roles.includes(role);
if (hasRole) {
// Allow removing the role
const newRoles = stream.roles.filter((r) => r !== role);
updateStream(streamId, { roles: newRoles });
} else {
// Check if role is already used in another stream
const usedRoles = getUsedRolesExcludingStream(streamId);
if (!usedRoles.has(role)) {
// Allow adding the role
const newRoles = [...stream.roles, role];
updateStream(streamId, { roles: newRoles });
}
}
},
[streams, updateStream, getUsedRolesExcludingStream],
);
const testStream = useCallback(
async (stream: StreamConfig) => {
if (!stream.url.trim()) {
toast.error(t("cameraWizard.commonErrors.noUrl"));
return;
}
setTestingStreams((prev) => new Set(prev).add(stream.id));
try {
const response = await axios.get("ffprobe", {
params: { paths: stream.url, detailed: true },
timeout: 10000,
});
let probeData: FfprobeResponse | null = null;
if (
response.data &&
response.data.length > 0 &&
response.data[0].return_code === 0
) {
probeData = response.data[0];
}
if (!probeData) {
const error =
Array.isArray(response.data?.[0]?.stderr) &&
response.data[0].stderr.length > 0
? response.data[0].stderr.join("\n")
: "Unable to probe stream";
const failResult: TestResult = { success: false, error };
updateStream(stream.id, { testResult: failResult, userTested: true });
onUpdate({
candidateTests: {
...(wizardData.candidateTests || {}),
[stream.url]: failResult,
} as CandidateTestMap,
});
toast.error(t("cameraWizard.commonErrors.testFailed", { error }));
return;
}
let ffprobeData: FfprobeData;
if (typeof probeData.stdout === "string") {
try {
ffprobeData = JSON.parse(probeData.stdout) as FfprobeData;
} catch {
ffprobeData = { streams: [] } as FfprobeData;
}
} else {
ffprobeData = probeData.stdout as FfprobeData;
}
const streamsArr = ffprobeData.streams || [];
const videoStream = streamsArr.find(
(s: FfprobeStream) =>
s.codec_type === "video" ||
s.codec_name?.includes("h264") ||
s.codec_name?.includes("hevc"),
);
const audioStream = streamsArr.find(
(s: FfprobeStream) =>
s.codec_type === "audio" ||
s.codec_name?.includes("aac") ||
s.codec_name?.includes("mp3") ||
s.codec_name?.includes("pcm_mulaw") ||
s.codec_name?.includes("pcm_alaw"),
);
let resolution: string | undefined = undefined;
if (videoStream) {
const width = Number(videoStream.width || 0);
const height = Number(videoStream.height || 0);
if (width > 0 && height > 0) {
resolution = `${width}x${height}`;
}
}
const fps = videoStream?.avg_frame_rate
? parseFloat(videoStream.avg_frame_rate.split("/")[0]) /
parseFloat(videoStream.avg_frame_rate.split("/")[1])
: undefined;
const testResult: TestResult = {
success: true,
resolution,
videoCodec: videoStream?.codec_name,
audioCodec: audioStream?.codec_name,
fps: fps && !isNaN(fps) ? fps : undefined,
};
updateStream(stream.id, { testResult, userTested: true });
onUpdate({
candidateTests: {
...(wizardData.candidateTests || {}),
[stream.url]: testResult,
} as CandidateTestMap,
});
toast.success(t("cameraWizard.step3.testSuccess"));
} catch (error) {
const axiosError = error as {
response?: { data?: { message?: string; detail?: string } };
message?: string;
};
const errorMessage =
axiosError.response?.data?.message ||
axiosError.response?.data?.detail ||
axiosError.message ||
"Connection failed";
const catchResult: TestResult = {
success: false,
error: errorMessage,
};
updateStream(stream.id, { testResult: catchResult, userTested: true });
onUpdate({
candidateTests: {
...(wizardData.candidateTests || {}),
[stream.url]: catchResult,
} as CandidateTestMap,
});
toast.error(
t("cameraWizard.commonErrors.testFailed", { error: errorMessage }),
);
} finally {
setTestingStreams((prev) => {
const newSet = new Set(prev);
newSet.delete(stream.id);
return newSet;
});
}
},
[updateStream, t, onUpdate, wizardData.candidateTests],
);
const setRestream = useCallback(
(streamId: string) => {
const stream = streams.find((s) => s.id === streamId);
if (!stream) return;
updateStream(streamId, { restream: !stream.restream });
},
[streams, updateStream],
);
const hasDetectRole = streams.some((s) => s.roles.includes("detect"));
return (
<div className="space-y-6">
<div className="text-sm text-secondary-foreground">
{t("cameraWizard.step3.description")}
</div>
<div className="space-y-4">
{streams.map((stream, index) => (
<Card key={stream.id} className="bg-secondary text-primary">
<CardContent className="space-y-4 p-4">
<div className="flex items-center justify-between">
<div>
<h4 className="font-medium">
{t("cameraWizard.step3.streamTitle", { number: index + 1 })}
</h4>
{stream.testResult && stream.testResult.success && (
<div className="mt-1 text-sm text-muted-foreground">
{[
stream.testResult.resolution,
stream.testResult.fps
? `${stream.testResult.fps} ${t("cameraWizard.testResultLabels.fps")}`
: null,
stream.testResult.videoCodec,
stream.testResult.audioCodec,
]
.filter(Boolean)
.join(" · ")}
</div>
)}
</div>
<div className="flex items-center gap-2">
{stream.testResult?.success && (
<div className="flex items-center gap-2 text-sm">
<FaCircleCheck className="size-4 text-success" />
<span className="text-success">
{t("cameraWizard.step3.connected")}
</span>
</div>
)}
{stream.testResult && !stream.testResult.success && (
<div className="flex items-center gap-2 text-sm">
<LuX className="size-4 text-danger" />
<span className="text-danger">
{t("cameraWizard.step3.notConnected")}
</span>
</div>
)}
{streams.length > 1 && (
<Button
variant="ghost"
size="sm"
onClick={() => removeStream(stream.id)}
className="text-secondary-foreground hover:text-secondary-foreground"
>
<LuTrash2 className="size-5" />
</Button>
)}
</div>
</div>
<div className="grid grid-cols-1 gap-4">
<div className="space-y-2">
<label className="text-sm font-medium text-primary-variant">
{t("cameraWizard.step3.url")}
</label>
<div className="flex flex-row items-center gap-2">
{isProbeMode && probeCandidates.length > 0 ? (
// Responsive: Popover on desktop, Drawer on mobile
!isMobile ? (
<Popover
open={openCombobox === stream.id}
onOpenChange={(isOpen) => {
setOpenCombobox(isOpen ? stream.id : null);
}}
>
<PopoverTrigger asChild>
<div className="min-w-0 flex-1">
<Button
variant="outline"
role="combobox"
aria-expanded={openCombobox === stream.id}
className="h-8 w-full justify-between overflow-hidden text-left"
>
<span className="truncate">
{stream.url
? stream.url
: t("cameraWizard.step3.selectStream")}
</span>
<LuChevronsUpDown className="ml-2 size-6 opacity-50" />
</Button>
</div>
</PopoverTrigger>
<PopoverContent
className="w-[--radix-popover-trigger-width] p-2"
disablePortal
>
<Command>
<CommandInput
placeholder={t(
"cameraWizard.step3.searchCandidates",
)}
className="h-9"
/>
<CommandList>
<CommandEmpty>
{t("cameraWizard.step3.noStreamFound")}
</CommandEmpty>
<CommandGroup>
{probeCandidates
.filter((c) => {
const used = getUsedUrlsExcludingStream(
stream.id,
);
return !used.has(c);
})
.map((candidate) => (
<CommandItem
key={candidate}
value={candidate}
onSelect={() => {
updateStream(stream.id, {
url: candidate,
testResult:
candidateTests[candidate] ||
undefined,
userTested:
!!candidateTests[candidate],
});
setOpenCombobox(null);
}}
>
<LuCheck
className={cn(
"mr-3 size-5",
stream.url === candidate
? "opacity-100"
: "opacity-0",
)}
/>
{candidate}
</CommandItem>
))}
</CommandGroup>
</CommandList>
</Command>
</PopoverContent>
</Popover>
) : (
<Drawer
open={openCombobox === stream.id}
onOpenChange={(isOpen) =>
setOpenCombobox(isOpen ? stream.id : null)
}
>
<DrawerTrigger asChild>
<div className="min-w-0 flex-1">
<Button
variant="outline"
role="combobox"
aria-expanded={openCombobox === stream.id}
className="h-8 w-full justify-between overflow-hidden text-left"
>
<span className="truncate">
{stream.url
? stream.url
: t("cameraWizard.step3.selectStream")}
</span>
<LuChevronsUpDown className="ml-2 size-6 opacity-50" />
</Button>
</div>
</DrawerTrigger>
<DrawerContent className="mx-1 max-h-[75dvh] overflow-hidden rounded-t-2xl px-2">
<div className="mt-2">
<Command>
<CommandInput
placeholder={t(
"cameraWizard.step3.searchCandidates",
)}
className="h-9"
/>
<CommandList>
<CommandEmpty>
{t("cameraWizard.step3.noStreamFound")}
</CommandEmpty>
<CommandGroup>
{probeCandidates
.filter((c) => {
const used = getUsedUrlsExcludingStream(
stream.id,
);
return !used.has(c);
})
.map((candidate) => (
<CommandItem
key={candidate}
value={candidate}
onSelect={() => {
updateStream(stream.id, {
url: candidate,
testResult:
candidateTests[candidate] ||
undefined,
userTested:
!!candidateTests[candidate],
});
setOpenCombobox(null);
}}
>
<LuCheck
className={cn(
"mr-3 size-5",
stream.url === candidate
? "opacity-100"
: "opacity-0",
)}
/>
{candidate}
</CommandItem>
))}
</CommandGroup>
</CommandList>
</Command>
</div>
</DrawerContent>
</Drawer>
)
) : (
<Input
value={stream.url}
onChange={(e) =>
updateStream(stream.id, {
url: e.target.value,
testResult: undefined,
})
}
className="h-8 flex-1"
placeholder={t(
"cameraWizard.step3.streamUrlPlaceholder",
)}
/>
)}
<Button
type="button"
onClick={() => testStream(stream)}
disabled={
testingStreams.has(stream.id) || !stream.url.trim()
}
variant="outline"
size="sm"
>
{testingStreams.has(stream.id) && (
<ActivityIndicator className="mr-2 size-4" />
)}
{t("cameraWizard.step3.testStream")}
</Button>
</div>
</div>
</div>
{stream.testResult &&
!stream.testResult.success &&
stream.userTested && (
<div className="rounded-md border border-danger/20 bg-danger/10 p-3 text-sm text-danger">
<div className="font-medium">
{t("cameraWizard.step3.testFailedTitle")}
</div>
<div className="mt-1 text-xs">
{stream.testResult.error}
</div>
</div>
)}
<div className="space-y-2">
<div className="flex items-center gap-1">
<Label className="text-sm font-medium text-primary-variant">
{t("cameraWizard.step3.roles")}
</Label>
<Popover>
<PopoverTrigger asChild>
<Button variant="ghost" size="sm" className="h-4 w-4 p-0">
<LuInfo className="size-3" />
</Button>
</PopoverTrigger>
<PopoverContent className="pointer-events-auto w-80 text-xs">
<div className="space-y-2">
<div className="font-medium">
{t("cameraWizard.step3.rolesPopover.title")}
</div>
<div className="space-y-1 text-muted-foreground">
<div>
<strong>detect</strong> -{" "}
{t("cameraWizard.step3.rolesPopover.detect")}
</div>
<div>
<strong>record</strong> -{" "}
{t("cameraWizard.step3.rolesPopover.record")}
</div>
<div>
<strong>audio</strong> -{" "}
{t("cameraWizard.step3.rolesPopover.audio")}
</div>
</div>
<div className="mt-3 flex items-center text-primary">
<Link
to={getLocaleDocUrl("configuration/cameras")}
target="_blank"
rel="noopener noreferrer"
className="inline"
>
{t("readTheDocumentation", { ns: "common" })}
<LuExternalLink className="ml-2 inline-flex size-3" />
</Link>
</div>
</div>
</PopoverContent>
</Popover>
</div>
<div className="rounded-lg bg-background p-3">
<div className="flex flex-wrap gap-2">
{(["detect", "record", "audio"] as const).map((role) => {
const isUsedElsewhere = getUsedRolesExcludingStream(
stream.id,
).has(role);
const isChecked = stream.roles.includes(role);
return (
<div
key={role}
className="flex w-full items-center justify-between"
>
<span className="text-sm capitalize">{role}</span>
<Switch
checked={isChecked}
onCheckedChange={() => toggleRole(stream.id, role)}
disabled={!isChecked && isUsedElsewhere}
/>
</div>
);
})}
</div>
</div>
</div>
<div className="space-y-2">
<div className="flex items-center gap-1">
<Label className="text-sm font-medium text-primary-variant">
{t("cameraWizard.step3.featuresTitle")}
</Label>
<Popover>
<PopoverTrigger asChild>
<Button variant="ghost" size="sm" className="h-4 w-4 p-0">
<LuInfo className="size-3" />
</Button>
</PopoverTrigger>
<PopoverContent className="pointer-events-auto w-80 text-xs">
<div className="space-y-2">
<div className="font-medium">
{t("cameraWizard.step3.featuresPopover.title")}
</div>
<div className="text-muted-foreground">
{t("cameraWizard.step3.featuresPopover.description")}
</div>
<div className="mt-3 flex items-center text-primary">
<Link
to={getLocaleDocUrl(
"configuration/restream#reduce-connections-to-camera",
)}
target="_blank"
rel="noopener noreferrer"
className="inline"
>
{t("readTheDocumentation", { ns: "common" })}
<LuExternalLink className="ml-2 inline-flex size-3" />
</Link>
</div>
</div>
</PopoverContent>
</Popover>
</div>
<div className="rounded-lg bg-background p-3">
<div className="flex items-center justify-between">
<span className="text-sm">
{t("cameraWizard.step3.go2rtc")}
</span>
<Switch
checked={stream.restream || false}
onCheckedChange={() => setRestream(stream.id)}
/>
</div>
</div>
</div>
</CardContent>
</Card>
))}
<Button
type="button"
onClick={addStream}
variant="outline"
className=""
>
<LuPlus className="mr-2 size-4" />
{t("cameraWizard.step3.addAnotherStream")}
</Button>
</div>
{!hasDetectRole && (
<div className="rounded-lg border border-danger/50 p-3 text-sm text-danger">
{t("cameraWizard.step3.detectRoleWarning")}
</div>
)}
<div className="flex flex-col gap-3 pt-6 sm:flex-row sm:justify-end sm:gap-4">
{onBack && (
<Button type="button" onClick={onBack} className="sm:flex-1">
{t("button.back", { ns: "common" })}
</Button>
)}
{onNext && (
<Button
type="button"
onClick={() => onNext?.()}
disabled={!canProceed || testingStreams.size > 0}
variant="select"
className="sm:flex-1"
>
{t("button.next", { ns: "common" })}
</Button>
)}
</div>
</div>
);
}
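The fps computation in the component above divides the two halves of ffprobe's `avg_frame_rate` rational string (e.g. `"30000/1001"`) and later filters out `NaN`. A standalone sketch of the same parsing rule — the helper name is illustrative, and the treatment of ffprobe's `"0/0"` ("unknown rate") convention mirrors the NaN guard in the wizard code:

```typescript
// Parse ffprobe's rational frame-rate string (e.g. "30000/1001", "25/1").
// ffprobe reports "0/0" when the rate is unknown; return undefined then,
// matching the `fps && !isNaN(fps)` guard used when building TestResult.
function parseAvgFrameRate(rate?: string): number | undefined {
  if (!rate) return undefined;
  const [num, den] = rate.split("/").map(Number);
  if (!den || !Number.isFinite(num) || !Number.isFinite(den)) return undefined;
  const fps = num / den;
  return Number.isNaN(fps) || fps <= 0 ? undefined : fps;
}
```

This also covers the NTSC case (`"30000/1001"` → ~29.97) that a naive `parseFloat(rate)` would truncate to 30000.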

View File

@ -18,8 +18,9 @@ import { PlayerStatsType } from "@/types/live";
import { FaCircleCheck, FaTriangleExclamation } from "react-icons/fa6";
import { LuX } from "react-icons/lu";
import { Card, CardContent } from "../../ui/card";
+import { maskUri } from "@/utils/cameraUtil";
-type Step3ValidationProps = {
+type Step4ValidationProps = {
wizardData: Partial<WizardFormData>;
onUpdate: (data: Partial<WizardFormData>) => void;
onSave: (config: WizardFormData) => void;
@ -27,13 +28,13 @@ type Step3ValidationProps = {
isLoading?: boolean;
};
-export default function Step3Validation({
+export default function Step4Validation({
wizardData,
onUpdate,
onSave,
onBack,
isLoading = false,
-}: Step3ValidationProps) {
+}: Step4ValidationProps) {
const { t } = useTranslation(["views/settings"]);
const [isValidating, setIsValidating] = useState(false);
const [testingStreams, setTestingStreams] = useState<Set<string>>(new Set());
@ -143,13 +144,13 @@ export default function Step3Validation({
if (testResult.success) {
toast.success(
t("cameraWizard.step3.streamValidated", {
t("cameraWizard.step4.streamValidated", {
number: streams.findIndex((s) => s.id === stream.id) + 1,
}),
);
} else {
toast.error(
t("cameraWizard.step3.streamValidationFailed", {
t("cameraWizard.step4.streamValidationFailed", {
number: streams.findIndex((s) => s.id === stream.id) + 1,
}),
);
@ -200,16 +201,16 @@ export default function Step3Validation({
(r) => r.success,
).length;
if (successfulTests === results.size) {
toast.success(t("cameraWizard.step3.reconnectionSuccess"));
toast.success(t("cameraWizard.step4.reconnectionSuccess"));
} else {
toast.warning(t("cameraWizard.step3.reconnectionPartial"));
toast.warning(t("cameraWizard.step4.reconnectionPartial"));
}
}
}, [streams, onUpdate, t, performStreamValidation]);
const handleSave = useCallback(() => {
if (!wizardData.cameraName || !wizardData.streams?.length) {
toast.error(t("cameraWizard.step3.saveError"));
toast.error(t("cameraWizard.step4.saveError"));
return;
}
@ -239,13 +240,13 @@ export default function Step3Validation({
return (
<div className="space-y-6">
<div className="text-sm text-muted-foreground">
{t("cameraWizard.step3.description")}
{t("cameraWizard.step4.description")}
</div>
<div className="space-y-4">
<div className="flex items-center justify-between">
<h3 className="text-lg font-medium">
{t("cameraWizard.step3.validationTitle")}
{t("cameraWizard.step4.validationTitle")}
</h3>
<Button
onClick={validateAllStreams}
@ -254,8 +255,8 @@ export default function Step3Validation({
>
{isValidating && <ActivityIndicator className="mr-2 size-4" />}
{isValidating
? t("cameraWizard.step3.connecting")
: t("cameraWizard.step3.connectAllStreams")}
? t("cameraWizard.step4.connecting")
: t("cameraWizard.step4.connectAllStreams")}
</Button>
</div>
@ -270,7 +271,7 @@ export default function Step3Validation({
<div className="flex flex-col space-y-1">
<div className="flex flex-row items-center">
<h4 className="mr-2 font-medium">
{t("cameraWizard.step3.streamTitle", {
{t("cameraWizard.step4.streamTitle", {
number: index + 1,
})}
</h4>
@ -331,7 +332,7 @@ export default function Step3Validation({
<div className="mb-3 flex items-center justify-between">
<div className="flex items-center gap-2">
<span className="text-sm">
{t("cameraWizard.step3.ffmpegModule")}
{t("cameraWizard.step4.ffmpegModule")}
</span>
<Popover>
<PopoverTrigger asChild>
@ -346,11 +347,11 @@ export default function Step3Validation({
<PopoverContent className="pointer-events-auto w-80 text-xs">
<div className="space-y-2">
<div className="font-medium">
{t("cameraWizard.step3.ffmpegModule")}
{t("cameraWizard.step4.ffmpegModule")}
</div>
<div className="text-muted-foreground">
{t(
"cameraWizard.step3.ffmpegModuleDescription",
"cameraWizard.step4.ffmpegModuleDescription",
)}
</div>
</div>
@ -374,7 +375,7 @@ export default function Step3Validation({
<div className="mb-2 flex flex-col justify-between gap-1 md:flex-row md:items-center">
<span className="break-all text-sm text-muted-foreground">
-{stream.url}
+{maskUri(stream.url)}
</span>
<Button
onClick={() => {
@ -402,17 +403,17 @@ export default function Step3Validation({
<ActivityIndicator className="mr-2 size-4" />
)}
{result?.success
? t("cameraWizard.step3.disconnectStream")
? t("cameraWizard.step4.disconnectStream")
: testingStreams.has(stream.id)
? t("cameraWizard.step3.connectingStream")
: t("cameraWizard.step3.connectStream")}
? t("cameraWizard.step4.connectingStream")
: t("cameraWizard.step4.connectStream")}
</Button>
</div>
{result && (
<div className="space-y-2">
<div className="text-xs">
{t("cameraWizard.step3.issues.title")}
{t("cameraWizard.step4.issues.title")}
</div>
<div className="rounded-lg bg-background p-3">
<StreamIssues
@ -455,7 +456,7 @@ export default function Step3Validation({
{isLoading && <ActivityIndicator className="mr-2 size-4" />}
{isLoading
? t("button.saving", { ns: "common" })
: t("cameraWizard.step3.saveAndApply")}
: t("cameraWizard.step4.saveAndApply")}
</Button>
</div>
</div>
@ -486,7 +487,7 @@ function StreamIssues({
if (streamUrl.startsWith("rtsp://")) {
result.push({
type: "warning",
message: t("cameraWizard.step1.errors.brands.reolink-rtsp"),
message: t("cameraWizard.step4.issues.brands.reolink-rtsp"),
});
}
}
@ -497,7 +498,7 @@ function StreamIssues({
if (["h264", "h265", "hevc"].includes(videoCodec)) {
result.push({
type: "good",
message: t("cameraWizard.step3.issues.videoCodecGood", {
message: t("cameraWizard.step4.issues.videoCodecGood", {
codec: stream.testResult.videoCodec,
}),
});
@ -511,20 +512,20 @@ function StreamIssues({
if (audioCodec === "aac") {
result.push({
type: "good",
message: t("cameraWizard.step3.issues.audioCodecGood", {
message: t("cameraWizard.step4.issues.audioCodecGood", {
codec: stream.testResult.audioCodec,
}),
});
} else {
result.push({
type: "error",
message: t("cameraWizard.step3.issues.audioCodecRecordError"),
message: t("cameraWizard.step4.issues.audioCodecRecordError"),
});
}
} else {
result.push({
type: "warning",
message: t("cameraWizard.step3.issues.noAudioWarning"),
message: t("cameraWizard.step4.issues.noAudioWarning"),
});
}
}
@ -534,7 +535,7 @@ function StreamIssues({
if (!stream.testResult?.audioCodec) {
result.push({
type: "error",
message: t("cameraWizard.step3.issues.audioCodecRequired"),
message: t("cameraWizard.step4.issues.audioCodecRequired"),
});
}
}
@ -544,7 +545,7 @@ function StreamIssues({
if (stream.restream) {
result.push({
type: "warning",
message: t("cameraWizard.step3.issues.restreamingWarning"),
message: t("cameraWizard.step4.issues.restreamingWarning"),
});
}
}
@ -557,14 +558,14 @@ function StreamIssues({
if (minDimension > 1080) {
result.push({
type: "warning",
message: t("cameraWizard.step3.issues.resolutionHigh", {
message: t("cameraWizard.step4.issues.resolutionHigh", {
resolution: stream.resolution,
}),
});
} else if (maxDimension < 640) {
result.push({
type: "error",
message: t("cameraWizard.step3.issues.resolutionLow", {
message: t("cameraWizard.step4.issues.resolutionLow", {
resolution: stream.resolution,
}),
});
@ -580,7 +581,7 @@ function StreamIssues({
) {
result.push({
type: "warning",
message: t("cameraWizard.step3.issues.dahua.substreamWarning"),
message: t("cameraWizard.step4.issues.dahua.substreamWarning"),
});
}
if (
@ -590,7 +591,7 @@ function StreamIssues({
) {
result.push({
type: "warning",
message: t("cameraWizard.step3.issues.hikvision.substreamWarning"),
message: t("cameraWizard.step4.issues.hikvision.substreamWarning"),
});
}
@ -662,7 +663,7 @@ function BandwidthDisplay({
return (
<div className="mb-2 text-sm">
<span className="font-medium text-muted-foreground">
{t("cameraWizard.step3.estimatedBandwidth")}:
{t("cameraWizard.step4.estimatedBandwidth")}:
</span>{" "}
<span className="text-secondary-foreground">
{streamBandwidth.toFixed(1)} {t("unit.data.kbps", { ns: "common" })}
@ -748,7 +749,7 @@ function StreamPreview({ stream, onBandwidthUpdate }: StreamPreviewProps) {
style={{ aspectRatio }}
>
<span className="text-sm text-danger">
{t("cameraWizard.step3.streamUnavailable")}
{t("cameraWizard.step4.streamUnavailable")}
</span>
<Button
variant="outline"
@ -757,7 +758,7 @@ function StreamPreview({ stream, onBandwidthUpdate }: StreamPreviewProps) {
className="flex items-center gap-2"
>
<LuRotateCcw className="size-4" />
{t("cameraWizard.step3.reload")}
{t("cameraWizard.step4.reload")}
</Button>
</div>
);
@ -771,7 +772,7 @@ function StreamPreview({ stream, onBandwidthUpdate }: StreamPreviewProps) {
>
<ActivityIndicator className="size-4" />
<span className="ml-2 text-sm">
{t("cameraWizard.step3.connecting")}
{t("cameraWizard.step4.connecting")}
</span>
</div>
);
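The hunk above swaps the raw `stream.url` for `maskUri(stream.url)` so credentials embedded in stream URLs are not rendered in the validation view (matching the PR's "unmasked camera paths" being moved behind an admin-only endpoint). The real helper lives in `@/utils/cameraUtil` and is not shown in this diff; the sketch below is a hypothetical illustration of such masking — the function name and regex are assumptions:

```typescript
// Hypothetical sketch of a credential-masking helper. The actual maskUri in
// "@/utils/cameraUtil" is not part of this diff, so details here are assumed.
function maskUriSketch(uri: string): string {
  // Replace the user:password section of URLs like
  // rtsp://user:pass@host:554/path with asterisks; URLs without
  // credentials pass through unchanged.
  return uri.replace(/^(\w+:\/\/)[^@/]+@/, "$1*:*@");
}
```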

View File

@ -15,7 +15,7 @@ import useSWR from "swr";
import ActivityIndicator from "../indicators/activity-indicator";
import { Event } from "@/types/event";
import { getIconForLabel } from "@/utils/iconUtil";
import { ReviewSegment } from "@/types/review";
import { REVIEW_PADDING, ReviewSegment } from "@/types/review";
import { LuChevronDown, LuCircle, LuChevronRight } from "react-icons/lu";
import { getTranslatedLabel } from "@/utils/i18n";
import EventMenu from "@/components/timeline/EventMenu";
@ -26,6 +26,7 @@ import { Link } from "react-router-dom";
import { Switch } from "@/components/ui/switch";
import { usePersistence } from "@/hooks/use-persistence";
import { isDesktop } from "react-device-detect";
import { resolveZoneName } from "@/hooks/use-zone-friendly-name";
import { PiSlidersHorizontalBold } from "react-icons/pi";
import { MdAutoAwesome } from "react-icons/md";
@ -192,7 +193,7 @@ export default function DetailStream({
<div className="relative flex h-full flex-col">
<div
ref={scrollRef}
className="scrollbar-container flex-1 overflow-y-auto pb-14"
className="scrollbar-container flex-1 overflow-y-auto overflow-x-hidden pb-14"
>
<div className="space-y-4 py-2">
{reviewItems?.length === 0 ? (
@ -348,7 +349,7 @@ function ReviewGroup({
? fetchedEvents.length
: (review.data.objects ?? []).length;
-return `${objectCount} ${t("detail.trackedObject", { count: objectCount })}`;
+return `${t("detail.trackedObject", { count: objectCount })}`;
}, [review, t, fetchedEvents]);
const reviewDuration = useMemo(
@ -390,8 +391,8 @@ function ReviewGroup({
)}
/>
</div>
<div className="mr-3 flex w-full justify-between">
<div className="ml-1 flex flex-col items-start gap-1.5">
<div className="mr-3 grid w-full grid-cols-[1fr_auto] gap-2">
<div className="ml-1 flex min-w-0 flex-col gap-1.5">
<div className="flex flex-row gap-3">
<div className="text-sm font-medium">{displayTime}</div>
<div className="relative flex items-center gap-2 text-white">
@ -407,7 +408,7 @@ function ReviewGroup({
</div>
<div className="flex flex-col gap-0.5">
{review.data.metadata?.title && (
<div className="mb-1 flex items-center gap-1 text-sm text-primary-variant">
<div className="mb-1 flex min-w-0 items-center gap-1 text-sm text-primary-variant">
<MdAutoAwesome className="size-3 shrink-0" />
<span className="truncate">{review.data.metadata.title}</span>
</div>
@ -431,7 +432,7 @@ function ReviewGroup({
e.stopPropagation();
setOpen((v) => !v);
}}
className="ml-2 inline-flex items-center justify-center rounded p-1 hover:bg-secondary/10"
className="inline-flex items-center justify-center self-center rounded p-1 hover:bg-secondary/10"
>
{open ? (
<LuChevronDown className="size-4 text-primary-variant" />
@ -477,7 +478,7 @@ function ReviewGroup({
<div className="rounded-full bg-muted-foreground p-1">
{getIconForLabel(audioLabel, "size-3 text-white")}
</div>
-<span>{getTranslatedLabel(audioLabel)}</span>
+<span>{getTranslatedLabel(audioLabel, "audio")}</span>
</div>
</div>
))}
@ -512,7 +513,8 @@ function EventList({
const isSelected = selectedObjectIds.includes(event.id);
-const label = event.sub_label || getTranslatedLabel(event.label);
+const label =
+event.sub_label || getTranslatedLabel(event.label, event.data.type);
const handleObjectSelect = (event: Event | undefined) => {
if (event) {
@ -793,17 +795,29 @@ function ObjectTimeline({
},
]);
const { data: config } = useSWR<FrigateConfig>("config");
const timeline = useMemo(() => {
if (!fullTimeline) {
return fullTimeline;
}
-return fullTimeline.filter(
-(t) =>
-t.timestamp >= review.start_time &&
-(review.end_time == undefined || t.timestamp <= review.end_time),
-);
-}, [fullTimeline, review]);
+return fullTimeline
+.filter(
+(t) =>
+t.timestamp >= review.start_time - REVIEW_PADDING &&
+(review.end_time == undefined ||
+t.timestamp <= review.end_time + REVIEW_PADDING),
+)
+.map((event) => ({
+...event,
+data: {
+...event.data,
+zones_friendly_names: event.data?.zones?.map((zone) =>
+resolveZoneName(config, zone),
+),
+},
+}));
+}, [config, fullTimeline, review]);
if (isValidating && (!timeline || timeline.length === 0)) {
return <ActivityIndicator className="ml-2 size-3" />;
@ -811,7 +825,7 @@ function ObjectTimeline({
if (!timeline || timeline.length === 0) {
return (
<div className="py-2 text-sm text-muted-foreground">
<div className="ml-8 text-sm text-muted-foreground">
{t("detail.noObjectDetailData")}
</div>
);
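The rewritten `timeline` memo above widens the filter window by `REVIEW_PADDING` on both ends, so timeline entries that fall just before the review segment starts (or just after it ends) still appear. The core window test can be sketched as a plain function — the padding value of 5 seconds is an assumption for illustration; the real constant comes from `@/types/review`:

```typescript
// Keep timeline entries within REVIEW_PADDING seconds of the review segment's
// bounds; an undefined endTime means the review is still in progress, so the
// window is open-ended on the right. The padding value here is assumed.
const REVIEW_PADDING = 5;

type Entry = { timestamp: number };

function inPaddedWindow(
  entries: Entry[],
  startTime: number,
  endTime?: number,
): Entry[] {
  return entries.filter(
    (e) =>
      e.timestamp >= startTime - REVIEW_PADDING &&
      (endTime === undefined || e.timestamp <= endTime + REVIEW_PADDING),
  );
}
```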

View File

@ -55,20 +55,24 @@ export default function EventMenu({
</DropdownMenuTrigger>
<DropdownMenuPortal>
<DropdownMenuContent>
-<DropdownMenuItem onSelect={handleObjectSelect}>
+<DropdownMenuItem
+className="cursor-pointer"
+onSelect={handleObjectSelect}
+>
{isSelected
? t("itemMenu.hideObjectDetails.label")
: t("itemMenu.showObjectDetails.label")}
</DropdownMenuItem>
<DropdownMenuSeparator className="my-0.5" />
<DropdownMenuItem
className="cursor-pointer"
onSelect={() => {
navigate(`/explore?event_id=${event.id}`);
}}
>
{t("details.item.button.viewInExplore")}
</DropdownMenuItem>
-<DropdownMenuItem asChild>
+<DropdownMenuItem className="cursor-pointer" asChild>
<a
download
href={
@ -86,6 +90,7 @@ export default function EventMenu({
event.data.type == "object" &&
config?.plus?.enabled && (
<DropdownMenuItem
className="cursor-pointer"
onSelect={() => {
setIsOpen(false);
onOpenUpload?.(event);
@ -97,6 +102,7 @@ export default function EventMenu({
{event.has_snapshot && config?.semantic_search?.enabled && (
<DropdownMenuItem
className="cursor-pointer"
onSelect={() => {
if (onOpenSimilarity) onOpenSimilarity(event);
else

View File

@ -515,7 +515,7 @@ export function ReviewTimeline({
<div
className={`absolute z-30 flex gap-2 ${
isMobile
? "bottom-4 right-1 flex-col gap-3"
? "bottom-4 right-1 flex-col-reverse gap-3"
: "bottom-2 left-1/2 -translate-x-1/2"
}`}
>

View File

@ -1,4 +1,10 @@
import React, { createContext, useContext, useState, useEffect } from "react";
import React, {
createContext,
useContext,
useState,
useEffect,
useRef,
} from "react";
import { FrigateConfig } from "@/types/frigateConfig";
import useSWR from "swr";
@ -36,6 +42,23 @@ export function DetailStreamProvider({
() => initialSelectedObjectIds ?? [],
);
// When the parent provides a new initialSelectedObjectIds (for example
// when navigating between search results) update the selection so children
// like `ObjectTrackOverlay` receive the new ids immediately. We only
// perform this update when the incoming value actually changes.
useEffect(() => {
if (
initialSelectedObjectIds &&
(initialSelectedObjectIds.length !== selectedObjectIds.length ||
initialSelectedObjectIds.some((v, i) => selectedObjectIds[i] !== v))
) {
setSelectedObjectIds(initialSelectedObjectIds);
}
// Intentionally include selectedObjectIds to compare previous value and
// avoid overwriting user interactions unless the incoming prop changed.
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [initialSelectedObjectIds]);
const toggleObjectSelection = (id: string | undefined) => {
if (id === undefined) {
setSelectedObjectIds([]);
@ -63,10 +86,33 @@ export function DetailStreamProvider({
setAnnotationOffset(cfgOffset);
}, [config, camera]);
-// Clear selected objects when exiting detail mode or changing cameras
+// Clear selected objects when exiting detail mode or when the camera
+// changes for providers that are not initialized with an explicit
+// `initialSelectedObjectIds` (e.g., the RecordingView). For providers
+// that receive `initialSelectedObjectIds` (like SearchDetailDialog) we
+// avoid clearing on camera change to prevent a race with children that
+// immediately set selection when mounting.
+const prevCameraRef = useRef<string | undefined>(undefined);
useEffect(() => {
-setSelectedObjectIds([]);
-}, [isDetailMode, camera]);
+// Always clear when leaving detail mode
+if (!isDetailMode) {
+setSelectedObjectIds([]);
+prevCameraRef.current = camera;
+return;
+}
+// If camera changed and the parent did not provide initialSelectedObjectIds,
+// clear selection to preserve previous behavior.
+if (
+prevCameraRef.current !== undefined &&
+prevCameraRef.current !== camera &&
+initialSelectedObjectIds === undefined
+) {
+setSelectedObjectIds([]);
+}
+prevCameraRef.current = camera;
+}, [isDetailMode, camera, initialSelectedObjectIds]);
const value: DetailStreamContextType = {
selectedObjectIds,
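The prop-sync effect added above only overwrites the user's current selection when the incoming `initialSelectedObjectIds` genuinely differs, which boils down to a shallow, order-sensitive array comparison. Extracted as a standalone sketch (the helper name is illustrative):

```typescript
// Shallow, order-sensitive equality for id arrays, mirroring the
// length + element-by-element comparison in the effect above.
function sameIds(a: string[], b: string[]): boolean {
  return a.length === b.length && a.every((v, i) => b[i] === v);
}
```

Skipping the update when the arrays are equal avoids clobbering in-progress user selections on unrelated re-renders.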

View File

@ -6,6 +6,7 @@ import { LivePlayerMode, LiveStreamMetadata } from "@/types/live";
export default function useCameraLiveMode(
cameras: CameraConfig[],
windowVisible: boolean,
activeStreams?: { [cameraName: string]: string },
) {
const { data: config } = useSWR<FrigateConfig>("config");
@ -20,16 +21,20 @@ export default function useCameraLiveMode(
);
if (isRestreamed) {
-Object.values(camera.live.streams).forEach((streamName) => {
-streamNames.add(streamName);
-});
+if (activeStreams && activeStreams[camera.name]) {
+streamNames.add(activeStreams[camera.name]);
+} else {
+Object.values(camera.live.streams).forEach((streamName) => {
+streamNames.add(streamName);
+});
+}
}
});
return streamNames.size > 0
? Array.from(streamNames).sort().join(",")
: null;
-}, [cameras, config]);
+}, [cameras, config, activeStreams]);
const streamsFetcher = useCallback(async (key: string) => {
const streamNames = key.split(",");
@ -68,7 +73,9 @@ export default function useCameraLiveMode(
[key: string]: LiveStreamMetadata;
}>(restreamedStreamsKey, streamsFetcher, {
revalidateOnFocus: false,
-dedupingInterval: 10000,
+revalidateOnReconnect: false,
+revalidateIfStale: false,
+dedupingInterval: 60000,
});
const [preferredLiveModes, setPreferredLiveModes] = useState<{
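With the new `activeStreams` parameter, the SWR key for stream metadata narrows to the stream each camera is actually showing rather than every configured restream, and it stays stable (deduplicated, sorted, comma-joined) so the raised `dedupingInterval` can take effect. A sketch of that key construction — the `camerasStreams` shape here is a simplification of `CameraConfig`, used only for illustration:

```typescript
// Build a stable SWR key from stream names: prefer each camera's active
// stream when known, otherwise include all of its restreamed streams.
// Sorting keeps the key identical across renders regardless of map order.
function buildStreamsKey(
  camerasStreams: { [camera: string]: string[] },
  activeStreams?: { [camera: string]: string },
): string | null {
  const names = new Set<string>();
  for (const [camera, streams] of Object.entries(camerasStreams)) {
    const active = activeStreams?.[camera];
    if (active) {
      names.add(active);
    } else {
      streams.forEach((s) => names.add(s));
    }
  }
  return names.size > 0 ? Array.from(names).sort().join(",") : null;
}
```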

View File

@ -0,0 +1,41 @@
import { FrigateConfig } from "@/types/frigateConfig";
import { useMemo } from "react";
import useSWR from "swr";
export function resolveZoneName(
config: FrigateConfig | undefined,
zoneId: string,
cameraId?: string,
) {
if (!config) return String(zoneId).replace(/_/g, " ");
if (cameraId) {
const camera = config.cameras?.[String(cameraId)];
const zone = camera?.zones?.[zoneId];
return zone?.friendly_name || String(zoneId).replace(/_/g, " ");
}
for (const camKey in config.cameras) {
if (!Object.prototype.hasOwnProperty.call(config.cameras, camKey)) continue;
const cam = config.cameras[camKey];
if (!cam?.zones) continue;
if (Object.prototype.hasOwnProperty.call(cam.zones, zoneId)) {
const zone = cam.zones[zoneId];
return zone?.friendly_name || String(zoneId).replace(/_/g, " ");
}
}
// Fallback: return a cleaned-up zoneId string
return String(zoneId).replace(/_/g, " ");
}
export function useZoneFriendlyName(zoneId: string, cameraId?: string): string {
const { data: config } = useSWR<FrigateConfig>("config");
const name = useMemo(
() => resolveZoneName(config, zoneId, cameraId),
[config, cameraId, zoneId],
);
return name;
}
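`resolveZoneName` above prefers a zone's configured `friendly_name` and otherwise prettifies the raw zone id by replacing underscores with spaces; the camera-scoped and config-wide lookups both funnel into that same per-zone fallback, which reduces to:

```typescript
// The per-zone fallback inside resolveZoneName: use the configured
// friendly_name when present, otherwise clean up the raw zone id.
type ZoneCfg = { friendly_name?: string };

function zoneDisplayName(zone: ZoneCfg | undefined, zoneId: string): string {
  return zone?.friendly_name || zoneId.replace(/_/g, " ");
}
```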

Some files were not shown because too many files have changed in this diff.