Online Estimation of Trifocal Tensors for Augmenting Live Video

Author:
Type: Book Chapter
Abstract: We propose a method to augment live video based on the tracking of natural features and the online estimation of trinocular geometry. Previous markerless approaches require computing the camera pose to render virtual objects. The strength of our method is that it does not require camera-pose tracking, while retaining the usual advantages of marker-based approaches for a fast implementation. A three-view AR system is used to demonstrate our approach. It consists of an uncalibrated camera that moves freely inside the scene of interest, and of three reference frames taken at the time of system initialization. As the camera moves, image features taken from an initial triplet set are tracked throughout the video sequence, and the trifocal tensor associated with each frame is estimated online. With this tensor, the square pattern that was visible in the reference frames is transferred into the video. This invisible pattern is then used by the ARToolKit to embed virtual objects.
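The transfer step described in the abstract — mapping a point seen in two reference views into the current frame via the trifocal tensor — can be illustrated with standard trifocal point transfer. The sketch below is not the paper's implementation; the canonical-camera construction of the tensor, the vertical-line choice for the auxiliary line through the second view, and all variable names are assumptions made for the demo, following the common textbook convention where the transferred point is x3^k = x1^i l2_j T[i,j,k].

```python
import numpy as np

def trifocal_from_cameras(P2, P3):
    """Trifocal tensor for canonical cameras P1 = [I | 0], P2, P3:
    T[i, j, k] = P2[j, i] * P3[k, 3] - P2[j, 3] * P3[k, i]."""
    T = np.zeros((3, 3, 3))
    for i in range(3):
        T[i] = np.outer(P2[:, i], P3[:, 3]) - np.outer(P2[:, 3], P3[:, i])
    return T

def transfer_point(T, x1, x2):
    """Transfer a correspondence (x1 in view 1, x2 in view 2, homogeneous
    coords) into view 3 using any line l2 through x2 that is not the
    epipolar line; here a vertical line through x2 is used."""
    l2 = np.array([1.0, 0.0, -x2[0] / x2[2]])  # line through x2: l2 . x2 = 0
    x3 = np.einsum('i,j,ijk->k', x1, l2, T)    # x3^k = x1^i l2_j T[i,j,k]
    return x3 / x3[2]                          # normalize homogeneous coords
```

In the online system sketched by the abstract, the tensor would be re-estimated for each incoming frame from the tracked feature triplets, and the four corners of the square pattern would each be transferred this way before being handed to the marker tracker.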
Publication date:
Affiliation: NRC Institute for Information Technology; National Research Council Canada
Peer reviewed: No
NRC number: 47365
NPARC number: 8914015
Record identifier: 9546a49e-aca7-4a08-a9dd-9a99cc2487bc
Record created: 2009-04-22
Record modified: 2016-05-09