Is it possible to see the tracked features in an image of the environment that the T265 uses for its SLAM algorithm?
Best regards
The RealSense Viewer software provides 2D visualization of IMU and tracking data; 3D visualization is available for pose samples.
Another way to visualize tracking information is through ROS.
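As a sketch of consuming the tracking data programmatically: librealsense's Python bindings (`pyrealsense2`) expose the T265 pose stream, whose samples carry a translation vector and a rotation quaternion. The helper below converts such a quaternion into a heading angle for simple 2D plotting. The quaternion math is standalone and runnable; the `pyrealsense2` usage is shown only as hedged comments, since it requires the device, and note that the T265's coordinate frame is Y-up, so which axis counts as "heading" depends on your mounting.

```python
import math

def quaternion_to_yaw(x: float, y: float, z: float, w: float) -> float:
    """Rotation about the Z axis (ZYX Euler convention), in radians,
    from a unit quaternion (x, y, z, w)."""
    return math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))

# With a live T265 you would obtain the quaternion roughly like this
# (hypothetical usage sketch; requires the device and pyrealsense2):
#
#   import pyrealsense2 as rs
#   pipe = rs.pipeline()
#   cfg = rs.config()
#   cfg.enable_stream(rs.stream.pose)
#   pipe.start(cfg)
#   frames = pipe.wait_for_frames()
#   pose = frames.get_pose_frame().get_pose_data()
#   heading = quaternion_to_yaw(pose.rotation.x, pose.rotation.y,
#                               pose.rotation.z, pose.rotation.w)

# A 90-degree rotation about Z maps to a heading of approximately pi/2:
print(quaternion_to_yaw(0.0, 0.0, math.sin(math.pi / 4), math.cos(math.pi / 4)))
```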
Thanks. Am I right that the cameras track distinctive details in a room, for example the edge of a chair? That is what I want to see in an image. Is that possible? And is it possible to change the minimum/maximum number of features the cameras should track, or is that fixed somewhere deep in the hardware/software?
The T265 can recognize visual cues, patterns and landmarks in the environment (such as patterns on walls and floors) and use them for navigation. An Intel spokesperson said: "You can use things like patterns on the wall or on the floor or on the ceiling to infer your location, which would not be observable with a depth camera or lidar". It helps robots to avoid getting lost, something known as the "kidnapped robot" problem.
I carefully researched your question about a minimum/maximum number of trackable features but could not find such a setting. I believe the 163 ±5° combined FOV of the fisheye lenses is designed to capture as many landmarks as possible.
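The SDK does not expose the T265's internal feature list, but to illustrate the kind of corner-like landmarks a visual-SLAM front end tracks, here is a minimal Harris corner response in plain NumPy, run on a synthetic image. This is only a sketch of the general technique, not the T265's actual (proprietary) detector; in practice you might instead run something like OpenCV's `goodFeaturesToTrack` on the fisheye frames retrieved via `pyrealsense2` to get a comparable visualization.

```python
import numpy as np

def harris_response(img, k=0.05, r=2):
    """Harris corner response: high where the local structure tensor has
    two large eigenvalues (a corner), near zero on edges and flat areas."""
    img = img.astype(float)
    # Image gradients via central finite differences.
    Ix = np.zeros_like(img)
    Iy = np.zeros_like(img)
    Ix[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0
    Iy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0

    def box(a):
        # Window sum over a (2r+1) x (2r+1) neighborhood via shifted copies.
        out = np.zeros_like(a)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out

    # Structure tensor entries, smoothed over the window.
    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace

# Demo: a bright square on a dark background; the response peaks
# near the square's four corners.
demo = np.zeros((40, 40))
demo[10:30, 10:30] = 1.0
resp = harris_response(demo)
peak = np.unravel_index(np.argmax(resp), resp.shape)
print(peak)
```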