Please use this identifier to cite or link to this item: http://hdl.handle.net/11400/8075
Τίτλος: Data fusion from multiple sources for the production of orthographic and perspective views with automatic visibility checking
Authors: Γραμματικόπουλος, Λάζαρος
Καλησπεράκης, Ηλίας
Καρράς, Γιώργος
Πέτσα, Έλλη
Item type: Conference publication
Conference publication type: Full paper
Keywords: Automation; Visualization; Orthorectification; DEM/DTM; Laser scanning
Subjects: Topography
Geodesy
Issue date: 17-Mar-2015
Availability date: 17-Mar-2015
Abstract: Orthophotography, and photo-textured 3D surface models in general, are among the most important photogrammetric products in heritage conservation. However, it is common knowledge that conventional orthorectification software accepts only surface descriptions obtained via 2D triangulation and cannot handle the question of image visibility. Ignoring the multiple surface elevations and image occlusions of the complex surface shapes typically met in conservation tasks results in visually and geometrically distorted products. Tiresome human intervention in the surface modelling and image orthorectification stages might partly remedy this shortcoming. For surface modelling, however, laser scanners now allow the collection of numerous accurate surface points and the creation of 3D meshes. The authors present their approach for the automated production of correct orthoimages (and perspective views), given multiple image coverage with known calibration/orientation data and fully 3D surface representations derived through laser scanning. The developed algorithm initially detects surface occlusions in the direction of projection. Next, all available imagery is utilised to establish a colour value for each pixel of the new image. After back-projecting (using the bundle adjustment data) all surface triangles onto all initial images to establish visibilities, texture 'blending' is performed. Suitable weighting controls the local radiometric contribution of each participating source image, while outlying colour values (due mainly to registration and modelling errors) are automatically filtered out with a simple statistical test. The generation of a depth map for each original image provides a means to further restrict the effects of orientation and modelling errors on texturing, mainly by checking closeness to occlusion borders. This 'topological' information may also allow suitable image windows to be established for colour interpolation. Practical tests of the implemented algorithm, using images with multiple overlap and two 3D models, indicate that this fusion of laser scanning and photogrammetry is indeed capable of automatically synthesising novel views from multiple images. The developed approach, which combines geometric accuracy and visual quality with speed, appears realistic for heritage conservation. Further necessary elaborations are also outlined.
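
The texturing step described in the abstract, in which each pixel of the new image receives a weighted blend of the colour samples from all source images that see the corresponding surface point, with visibility verified against per-image depth maps and outlying values rejected by a simple statistical test, can be illustrated by the minimal Python sketch below. The function names, the depth tolerance and the rejection threshold are assumptions made for illustration only and do not reproduce the authors' implementation.

    import numpy as np

    def is_visible(point_cam, u, v, depth_map, rel_tol=0.01):
        """Rough visibility test for one source image: the 3D point (given in
        that camera's coordinate system) is accepted if its depth agrees with
        the pre-computed depth map at pixel (u, v) within a relative tolerance."""
        return abs(point_cam[2] - depth_map[v, u]) <= rel_tol * depth_map[v, u]

    def blend_colour(samples, weights, z_thresh=2.0):
        """Weighted blending of the colour samples gathered from all images in
        which a surface point is visible; samples deviating strongly from the
        per-channel median are treated as outliers (e.g. caused by registration
        or modelling errors) and excluded before averaging."""
        c = np.asarray(samples, dtype=float)      # shape (N, 3), RGB samples
        w = np.asarray(weights, dtype=float)      # shape (N,), per-image weights
        med = np.median(c, axis=0)
        std = c.std(axis=0) + 1e-6                # avoid division by zero
        inliers = np.all(np.abs(c - med) <= z_thresh * std, axis=1)
        c, w = c[inliers], w[inliers]
        if len(c) == 0:
            return None                           # leave the pixel unfilled
        return (w[:, None] * c).sum(axis=0) / w.sum()

    # Example: three images see the point; the third sample is an outlier.
    print(blend_colour([[120, 80, 60], [118, 82, 61], [30, 200, 15]],
                       weights=[0.5, 0.4, 0.1]))

In a full implementation the weights would be derived from the viewing geometry of each source image (for instance viewing angle and ground resolution), in line with the weighting scheme mentioned in the abstract.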
Language: English
Conference: Proc. XX CIPA International Symposium
Access: Publicly available
License: Attribution-NonCommercial-NoDerivs 3.0 United States
URI: http://hdl.handle.net/11400/8075
Appears in collections: Research project results

Files in this item:
There are no files associated with this item


This item is protected by a Creative Commons License