3D-ICONS Guidelines
GUIDELINES 
First published in 2014 by 3D-ICONS 
©3D-ICONS 
Design and layout by: Ian McCarthy 
Printed in Ireland by: Paceprint, Shaws Lane, Sandymount, Dublin 4, Ireland 
3D-ICONS is a project funded under the European Commission’s ICT Policy Support Programme, 
project no. 297194. 
The views and opinions expressed in this presentation are the sole responsibility of the authors and 
do not necessarily reflect the views of the European Commission.
CONTENTS 
Introduction
Guidelines
3D Data Capture Techniques
Short range techniques
Long & mid range techniques
Multi Scale Image Based Methods
Post Processing of 3D Content
Post-Process A - Geometric reconstruction
Post-Process B - Model structuring
Post-Process C - Visual enrichment of 3D models
Post-Process D - Hypothetical reconstruction
Creating complementary 2D media (derived from the 3D model)
3D Publishing Methodology
Online publication technologies
IPR Considerations
Metadata
CARARE 2.0 Metadata Schema
Resources for CARARE Metadata Creation
Relating Metadata to Europeana
Licensing & IPR Considerations
IPR & the 3D pipeline
Access Agreement
Creative Commons
Appendix 1: Additional 3D-ICONS Resources
Appendix 2: Project Partners
Introduction 
Public fascination with architectural and archaeological heritage is well known; according to the UN World Tourism Organisation, it is proven to be one of the main reasons for tourism. Historic buildings and archaeological monuments form a significant component of Europe's cultural heritage; they are the physical testimonies of European history and of the different events that led to the creation of the European landscape as we know it today.
The documentation of built heritage increasingly avails of 3D scanning and other remote sensing technologies, which produce digital replicas in an accurate and fast way. Such digital models have a large range of uses, from the conservation and preservation of monuments to the communication of their cultural value to the public. They may also support in-depth analysis of their architectural and artistic features as well as allow the production of interpretive reconstructions of their past appearance.
The goal of the 3D-ICONS project, funded under the European Commission's ICT Policy Support Programme and building on the results of CARARE (www.carare.eu) and 3D-COFORM (www.3d-coform.eu), is to provide Europeana with 3D models of architectural and archaeological monuments of remarkable cultural importance. The project brings together 16 partners (see Appendix 2) from 11 countries across Europe with relevant expertise in 3D modelling and digitisation. The main purpose of the project is to produce around 4,000 accurate 3D models, which have to be processed into a simplified form in order to be visualised on low-end personal computers and on the web.
The structure of this publication has been created with two distinct sections:
Guidelines: documentation of the digitisation, modelling and online access pipeline for the creation of online 3D models of cultural heritage objects.
Case Studies: 28 examples of 3D content creation by the 3D-ICONS partners across a range of monuments, architectural features and artefacts.
Image of 3D captured data (e.g. point cloud and mesh) model of the Chrysippus head
Greyscale radiance scaling shaded version of the Church of the Holy Apostles 3D model
The Cenacle complex: X-ray filter view, re-coloured, generated by MeshLab
Guidelines 
The 3D-ICONS project exploits existing tools and methodologies and integrates them into a complete supply chain of 3D digitisation to contribute a significant mass of 3D content to Europeana. These guidelines aim to document this complete pipeline, which covers all the technical and logistic aspects of creating 3D models of cultural heritage objects where no established digitisation practice exists.
Each section of these guidelines corresponds to one of the five interlinked stages of the 3D-ICONS pipeline:
1. 3D Data Capture Techniques 
2. Post Processing of 3D Content 
3. 3D Publishing Methodology 
4. Metadata 
5. Licensing & IPR Considerations
When reading the guidelines it is important to understand that each stage in the processing pipeline is interrelated, and therefore one should look at the pipeline as a holistic approach to the challenge of capturing and presenting 3D models of cultural heritage objects. Data capture, post processing and 3D publishing activities normally occur sequentially.
The direction of these activities is not only towards the final online 3D model. In carrying out your own 3D heritage efforts, one should also consider the final potential publishing methodology, and travel back up the supply chain to identify the most appropriate capture and modelling techniques to provide this online 3D solution. The processes involved with the creation of metadata and the selection of appropriate data licensing should be integrated at all stages of the pipeline.
Diagram of the 3D-ICONS pipeline: Capture, Modelling and Online Delivery, with Metadata and Licensing applying across all stages
These guidelines are a product of the effort of all project partners and are the synthesis of several project publications (see Appendix 1) which can be consulted for in-depth documentation of the different components of the pipeline. The guidelines do not represent an exhaustive list of all the potential processing paths but provide, describe and explore the solutions provided by the 3D-ICONS project.
3D Data Capture Techniques 
In recent years the development of technologies and techniques for the surface data capture of 
three-dimensional artefacts and monuments has allowed both geometrical and structural information to be 
documented. Several approaches have been developed, each of which addresses different circumstances and records different characteristics of the 3D artefact or monument.
At present there is a wide range of 3D acquisition 
technologies, which can be generally classified
into contact and non-contact systems. Contact 
systems are not popular in the Cultural Heritage 
(CH) domain as they require physical contact with 
potentially fragile artefacts and structures. 
In contrast, non-contact systems have been used 
over the last decade in many CH digitisation 
projects with success. Non-contact systems are 
divided into active (those which emit their own 
electromagnetic energy for surface detection) 
and passive (those which utilise ambient 
electromagnetic energy for surface detection). 
Taxonomy of 3D data capture techniques: active methods (laser scanning with time of flight or phase shift, structured light, range sensing) and passive image-based methods
Active range-sensing instruments work without contact with the artefact and hence fulfil the requirement that recording devices must not potentially damage the artefact. In addition, their luminous intensity is limited to relatively small values and thus does not cause material damage (e.g. by bleaching pigments). These two properties make them particularly well adapted for applications in CH, where non-invasive and non-destructive analyses are crucial for the protection of heritage.
The capabilities of the different technologies vary in terms of several criteria which must be considered and balanced when formulating appropriate campaign strategies. These include:
• Resolution – the minimum quantitative distance between two consecutive measurements
• Accuracy – what is the maximum level of recorded accuracy?
• Range – how close or far away can the device record an object?
• Sampling rate – what is the minimum time between two consecutive measurements?
• Cost – what is the expense of the equipment and software to purchase or lease?
• Operational environmental conditions – in what conditions will this method work, i.e. is a dark working environment required?
• Skill requirements – is extensive training required to carry out the data capture technique?
• Use – what will the 3D data be used for, i.e. scientific analysis or visualisation?
• Material – from what substance is the artefact/monument fabricated?
There are significant variations between the capabilities of different approaches. For example, triangulation techniques can produce greater accuracy than time-of-flight, but can only be used at relatively short range. Where great accuracy is a requirement, this can normally only be achieved with close access to the heritage object to be digitized (< 1 m). If physical access to the artefact is difficult or requires the construction of special scaffolding, other constraints need consideration (e.g. using an alternative non-invasive technique). Alternatively, if physical access is impractical without unacceptable levels of invasive methods, then sensing from a greater distance may be required, utilising direct distance measurement techniques (TOF, phase shift) leading to less accurate results. When selecting the appropriate methodology, consideration must also be given to the length of time available to carry out the data collection process and the relative speed of data capture of each technology.
Short Range Techniques 
Laser Triangulation (LT) 
One of the most widely used active acquisition methods is Laser Triangulation (LT). The method is based on an instrument that carries a laser source and an optical detector. The laser source emits light in the form of a spot, a line or a pattern on the surface of the object while the optical detector captures the deformations of the light pattern due to the surface's morphology. The depth is computed by using the triangulation principle. Laser scanners are known for their high accuracy in geometry measurements (around 50 µm) and dense sampling (around 100 µm). Current LT systems are able to offer a perfect match between distance measurements and colour information. One such method combines three laser beams (each with a wavelength close to one of the three primary colours) into an optical fibre. The acquisition system is able to capture both geometry and colour using the same composite laser beam while being unaffected by ambient lighting and shadows.
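The depth computation by triangulation can be illustrated with a small worked example. The sketch below (Python; the baseline and angle values are invented for illustration and do not describe any particular scanner) solves the laser-source/detector/object triangle with the law of sines to recover the depth of the illuminated point.

```python
import math

def lt_depth(baseline_m: float, laser_angle_rad: float, camera_angle_rad: float) -> float:
    """Depth of the illuminated surface point above the laser-detector baseline.

    The laser source and the optical detector sit at the two ends of the baseline;
    each sees the lit point at a known angle to that baseline, so the triangle can
    be solved with the law of sines.
    """
    point_angle = math.pi - laser_angle_rad - camera_angle_rad
    # side of the triangle from the detector to the lit point (opposite the laser angle)
    camera_to_point = baseline_m * math.sin(laser_angle_rad) / math.sin(point_angle)
    # perpendicular distance of the point from the baseline, i.e. the measured depth
    return camera_to_point * math.sin(camera_angle_rad)

# hypothetical values: a 20 cm baseline, laser at 80 degrees, detector ray at 75 degrees
print(round(lt_depth(0.20, math.radians(80), math.radians(75)), 4), "m")  # ~0.45 m
```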
Structured Light (SL) 
Structured Light (SL) - also known as fringe 
projection systems - is another popular active 
acquisition method that is based on 
projecting a sequence of different alternated
dark and bright stripes onto the surface of an 
object and extracting the 3D geometry by 
monitoring the deformations of each pattern. 
By examining the edges of each line in the 
pattern, the distance from the scanner to the 
object’s surface is calculated by trigonometric 
triangulation. Significant research has been
carried out on the projection of fringe patterns 
that are suitable for maximizing the 
measurement resolution. Current research is 
focused on developing SL systems that are 
able to capture 3D surfaces in real-time. This is 
achieved by increasing the speed of projection 
patterns and capturing algorithms. 
Diagram illustrating the principles of laser triangulation (LT) based range devices 
Diagram illustrating the principles of structured light (SL) measurement devices 
Long & Mid Range Techniques
Time of Flight (TOF) 
Time-Of-Flight (TOF) - also known as terrestrial laser scanning - is an active method commonly used for the 3D digitisation of architectural heritage (e.g. an urban area of cultural importance, a monument, an excavation, etc.). The method relies on a laser rangefinder which is used to detect the distance of a surface by timing the round-trip time of a light pulse. By rotating the laser and sensor (usually via a mirror), the scanner can scan up to a full 360 degrees around itself. The accuracy of such systems is related to the precision of the timer. For longer distances (modern systems allow the measurement of ranges up to 6 km), TOF systems provide excellent results. An alternative approach to TOF scanning is Phase-Shift (PS), also an active acquisition method, used in closer-range distance measurement systems. It is again based on the round trip of the laser pulse, but instead of timing the trip the system measures the phase difference between the outgoing and returning laser pulse to provide a more precise measurement.
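Both principles reduce to short formulas, illustrated in the hedged sketch below (the timing and modulation-frequency values are invented examples, not the specification of any particular instrument): a TOF scanner halves the round-trip travel of the pulse, while a phase-shift scanner converts the measured phase difference into a fraction of the modulation wavelength.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_range(round_trip_time_s: float) -> float:
    """Time-of-flight: the pulse travels out and back, so halve the distance."""
    return C * round_trip_time_s / 2.0

def phase_shift_range(phase_shift_rad: float, modulation_freq_hz: float) -> float:
    """Phase shift: the phase difference gives the fraction of one modulation
    wavelength covered on the round trip (unambiguous only within half a wavelength)."""
    wavelength = C / modulation_freq_hz
    return (phase_shift_rad / (2.0 * math.pi)) * wavelength / 2.0

print(tof_range(66.7e-9))                 # ~10 m for a 66.7 ns round trip
print(phase_shift_range(math.pi, 10e6))   # 7.5 m for a half-cycle shift at 10 MHz
```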
Diagram illustrating the principles of time of flight (TOF) measurement devices 
Diagram illustrating the principles of phase shift (PS) measurement devices 
Multi Scale Image based Methods 
Traditional Photogrammetry 
Image-based methods can be considered as the passive version of SL. In principle, image-based methods involve stereo 
calibration, feature extraction, feature correspondence analysis and depth computation based on corresponding points. 
It is a simple and low cost (in terms of equipment) approach, but it involves the challenging task of correctly identifying 
common points between images. Photogrammetry is the primary image-based method that is used to determine the 2D and 
3D geometric properties of the objects that are visible in an image set. The determination of the attitude, the position and the 
intrinsic geometric characteristics of the camera is known as the fundamental photogrammetric problem. It can be described 
as the determination of camera interior and exterior orientation parameters, as well as the determination of the 3D 
coordinates of points on the images. Photogrammetry can be divided into two categories: aerial and terrestrial photogrammetry.
In aerial photogrammetry, images are acquired via overhead shots from an aircraft or a UAV, whilst in terrestrial photogrammetry images are captured from locations near or on the surface of the earth. Additionally, when the object size and the distance between the camera and object are less than 100 m, terrestrial photogrammetry is also defined as close range photogrammetry. The accuracy of photogrammetric measurements is largely a function of the camera's optics quality
and sensor resolution. Current commercial and open photogrammetric software solutions are able to quickly perform tasks 
such as camera calibration, epipolar geometry computations and textured map 3D mesh generation. Common digital images 
can be used and under suitable conditions high accuracy measurements can be obtained. The method can be considered 
objective and reliable. Using modern software solutions it can be relatively simple to apply and has a low cost. When 
combined with accurate measurements derived from a total station for example it can produce models of high accuracy 
for scales of 1:100 and even higher. 
Overlapping areas of images captured at A and B are resolved within the 3D model space to enable precise and accurate measurement of the model
The basic principle of stereo photogrammetry. The building appears in two images, taken at L1 and L2 respectively. The top of the building is represented by the points a1 and a2 and the base by b1 and b2
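For the normal (parallel-camera) case illustrated above, depth follows from similar triangles: Z = f · B / p, where B is the airbase, f the focal length and p the measured x-parallax between the two images. The snippet below is a minimal illustration with invented numbers.

```python
def stereo_depth(focal_length_mm: float, airbase_m: float, parallax_mm: float) -> float:
    """Normal-case stereo photogrammetry: camera-to-object distance from the x-parallax.

    The same point imaged from the two exposure stations shifts by the parallax p
    in image space; by similar triangles the depth is Z = f * B / p.
    """
    return focal_length_mm * airbase_m / parallax_mm

# hypothetical values: a 50 mm lens, a 10 m airbase and a 2.5 mm parallax give 200 m
print(stereo_depth(50.0, 10.0, 2.5))
```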
Semi Automated Image Based Methods 
In recent times, the increase in computation power has allowed the introduction of semi-automated image-based methods. One such example is the combination of Structure-From-Motion (SFM) and Dense Multi-View 3D Reconstruction (DMVR) methods. They can be considered as the current extension of image-based methods. Over the last few years a number of software solutions implementing the SFM-DMVR algorithms from unordered image collections have been made available to the broad public. More specifically, SFM is considered an extension of stereo vision, where instead of image pairs the method attempts to reconstruct depth from a number of unordered images that depict a static scene or an object from arbitrary viewpoints. Apart from the feature extraction phase, the trajectories of corresponding features over the image collection are also computed. The method mainly uses the corresponding features, which are shared between different images that depict overlapping areas, to calculate the intrinsic and extrinsic parameters of the camera. These parameters are related to the focal length, the image format, the principal point, the lens distortion coefficients, the location of the projection centre and the image orientation in 3D space. Many systems involve the bundle adjustment method in order to improve the accuracy of calculating the camera trajectory within the image collection, minimise the projection error and prevent the error build-up of the camera position tracking.
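The quantity that bundle adjustment minimises is the total reprojection error: the difference between where each reconstructed 3D point projects into a camera and where the corresponding feature was actually observed. The sketch below computes those residuals for a simplified pinhole model without lens distortion; in practice a non-linear least-squares solver (e.g. scipy.optimize.least_squares) would minimise them over the camera and point parameters. The function and parameter names are illustrative assumptions, not part of any specific SFM package.

```python
import numpy as np

def reprojection_residuals(points_3d, rotations, translations, focal_lengths, observations):
    """Residuals that a bundle adjustment would minimise.

    points_3d:     (N, 3) array of estimated scene points
    rotations:     list of (3, 3) camera rotation matrices
    translations:  list of (3,) camera translation vectors
    focal_lengths: list of focal lengths (simple pinhole, no distortion)
    observations:  dict {(camera_index, point_index): (u, v)} of measured image points
    """
    residuals = []
    for (ci, pi), (u, v) in observations.items():
        # transform the scene point into the camera frame and project it
        p_cam = rotations[ci] @ points_3d[pi] + translations[ci]
        u_hat = focal_lengths[ci] * p_cam[0] / p_cam[2]
        v_hat = focal_lengths[ci] * p_cam[1] / p_cam[2]
        residuals.extend([u_hat - u, v_hat - v])
    return np.array(residuals)
```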
Diagram illustrating the principles of structure from motion (SFM) measurement from multiple overlapping images 
Example of SFM methodology illustrating the orientation and number of overlapping images utilised in the modelling of a building (CETI)
The resulting 3D point cloud data sets derived using SFM (CETI) 
There are many instances of SFM and DMVR software; the main solutions are summarised below.

ARC 3D (Automatic Reconstruction Conduit) - www.arc3d.be: Web-service where the user uploads an image collection and the system returns a dense 3D reconstruction of the scene. The resulting 3D reconstruction is created using cloud computing technology and can be parsed by Meshlab.

123D Catch (Autodesk) - www.123dapp.com/catch: The service is part of a set of tools that are freely offered by the company and aims towards the efficient creation and publishing of 3D content on the Web. The service can be accessed by a dedicated 3D data viewing-processing software tool that has recently been made available for the iOS mobile platform.

Hypr3D (Viztu Technologies) - www.hypr3d.com: Web-based 3D reconstruction from images service. The user can upload the images through the website's interface without the need to download any standalone software application.

PhotoModeler Scanner (Eos Systems) - www.photomodeler.com: Reconstructs the content of an image collection as a 3D dense point cloud, but requires the positioning of specific photogrammetric targets around the scene in order to calibrate the camera.

Insight3D - insight3d.sourceforge.net: Open source solution to create 3D models from photographs. The software doesn't provide a DMVR option, but allows the user to manually create low complexity 3D meshes that can be textured automatically (image back-projection) by the software.

PhotoScan (Agisoft) - www.agisoft.ru: SFM-DMVR software solution that can merge the independent depth maps of all images and then produce a single vertex-painted point cloud that can be converted to a triangulated 3D mesh of different densities.

Pix4D - pix4d.com: Software able to create 3D digital elevation models from image collections captured by UAVs. The software is offered as a standalone application or as a web-service.
Post Processing of 3D Content
3D post-processing is a complex procedure consisting of a sequence of processing steps that result in the 
direct improvement of acquired 3D data (by laser scanning, photogrammetry), and its transformation into 
visually enriched (and in some cases semantically structured) geometric representations. Post-processing 
also allows the creation of multiple 3D models starting from the same gathered data according to the 
desired application, level of detail and other additional criteria. The results of the post-processing phase 
are 3D geometric representations accompanied by complementary 2D media, which are the digital assets 
ready to be converted (or embedded) into the final web publishing formats.
Post-Process A - Geometric reconstruction 
Geometric reconstruction is the essential processing step for the elaboration of a 3D representation of an artefact or monument following 3D digitisation. This can be achieved using several relevant techniques which must be chosen based upon:
• The morphological complexity of the object (from geometric primitives to organic shapes)
• The scale of the object
• What the final model will be used for (ranging from metric analysis to public dissemination)
Automatic meshing from a dense 3D point cloud 
The simplest criterion for choosing and evaluating a relevant 3D geometric reconstruction technique is the degree of consistency of the 3D model compared to the real object. These guidelines are primarily concerned with the creation of 3D models from digitised data; this section therefore focuses upon the automated meshing of 3D data from point-cloud data.
However, additional methods are available for the 3D reconstruction, including (in order of level of approximation 
to reality): 
• Interactive or semi-automatic reconstruction based on relevant profiles
• Interactive or semi-automatic reconstruction based on primitives adjustment
• Interactive reconstruction based on technical iconography (plans, cross-sections and elevations)
• Interactive reconstruction based on artistic iconography (sketches, paintings, etc.)
Point cloud data 
Once an artefact or monument has been digitised, the initial results (raw data) can be represented by a series of three-dimensional data points in a coordinate system, commonly called a point cloud. The processing of point clouds involves cleaning and alignment phases. The cleaning phase involves the removal of all non-desired data. Non-desired data would include poorly captured surface areas (e.g. high deviation between the laser beam and the surface's normal), areas that belong to other objects (e.g. survey apparatus, people), outlying points and any other badly captured areas.
Another common characteristic of the raw data is noise. Noise can be described as the random spatial displacement of vertices around the actual surface that is being digitised. Compared to active scanning techniques such as laser scanning, image based techniques suffer more from noise artefacts. Noise filtering is an essential step that requires cautious application as it affects the fine morphological details described by the data.
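As an illustration of the cleaning and noise-filtering phase, the sketch below uses the open-source Open3D library (assumed to be installed; the file names and threshold values are hypothetical) to remove statistical outliers from a raw registered point cloud. A real survey would tune the neighbourhood size and thresholds to the scan resolution.

```python
import open3d as o3d

# load the registered raw point cloud (hypothetical file name)
pcd = o3d.io.read_point_cloud("scan_raw.ply")

# remove outlying points: points whose mean distance to their 20 nearest neighbours
# is more than 2 standard deviations above the average are discarded
cleaned, kept_indices = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# resample to an even point density (the voxel size should match the scan resolution)
cleaned = cleaned.voxel_down_sample(voxel_size=0.002)

o3d.io.write_point_cloud("scan_cleaned.ply", cleaned)
```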
Image of intensity shaded point cloud model of Cahergal stone fort (Discovery Programme)
Processing mesh data 
The next stage in the processing pipeline is the production of a surfaced or “wrapped” 3D model. The transformation of point 
cloud data into a surface of triangular meshes is the procedure of grouping triplets of point cloud vertices to compose a triangle. 
The representation of a point cloud as a triangular mesh does not eliminate the noise carried by the data. Nevertheless, the noise filtering of a triangular mesh is more efficient in terms of algorithm development due to the known surface topology and the surface normal vectors of the neighbouring triangles. Several processes must be completed to produce a topologically correct 3D mesh model.
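One widely used automatic meshing approach is Poisson surface reconstruction, shown in the hedged Open3D sketch below (file names and parameters are placeholders, and this is only one of several wrapping algorithms that could be applied to the cleaned point cloud).

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan_cleaned.ply")   # hypothetical cleaned cloud

# surface normals are required by the reconstruction; estimate them from local neighbourhoods
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))

# Poisson reconstruction wraps the points in a watertight triangular mesh;
# a higher octree depth keeps more detail but produces many more triangles
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)

o3d.io.write_triangle_mesh("mesh_raw.ply", mesh)
```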
Image of point cloud data set and subsequent derived mesh model (Discovery Programme)
Mesh Cleaning 
Incomplete or problematic data from digitising an object in three dimensions is another common situation. Discontinuities (e.g. holes) in the data are introduced in each partial scan due to occlusions, accessibility limitations or even challenging surface properties. The procedure of filling holes is handled in two steps. The first step is to identify the areas that contain missing data. For small regions, this can be achieved automatically using currently available 3D data processing software solutions. However, for larger areas significant user interaction is necessary for their accurate identification. Once the identification is completed, the reconstruction of the missing data areas can be performed by using algorithms that take into consideration the curvature trends of the hole boundaries. Filling holes of complex surfaces is not a trivial task and can only be based on assumptions about the topology of the missing data. Additional problems identified in a mesh may include spikes, unreferenced vertices and non-manifold edges, and these should also be removed during the cleaning stage. Meshing software (such as Meshlab or Geomagic Studio) has several routines to assist in the cleaning of problem areas of meshes.
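Hole filling itself is usually handled interactively in tools such as Meshlab or Geomagic Studio, as noted above, but the routine defects listed (spikes, unreferenced vertices, non-manifold edges) can be removed automatically. A minimal Open3D sketch with hypothetical file names:

```python
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("mesh_raw.ply")

# remove the common meshing defects before any interactive hole filling
mesh.remove_duplicated_vertices()
mesh.remove_degenerate_triangles()
mesh.remove_unreferenced_vertices()
mesh.remove_non_manifold_edges()

o3d.io.write_triangle_mesh("mesh_clean.ply", mesh)
```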
Illustration of the identification and closing of holes within the 3D mesh model (Discovery Programme)
Mesh Simplification 
Mesh simplification, also known as decimation, is one of the most common approaches to reducing the amount of data needed to describe the complete surface of an object. In most cases the data produced by the 3D acquisition system includes vast amounts of superfluous points. As a result, the size of the raw data is often prohibitive for interactive visualisation applications, and hardware requirements are beyond the standard computer system of the average user. Mesh simplification methods reduce the amount of data required to describe the surface of an object while retaining the geometrical quality of the 3D model within the specifications of a given application. A popular method for significantly reducing the number of vertices of a triangulated mesh, while maintaining the overall appearance of the object, is quadric edge collapse decimation. This method merges the common vertices of adjacent triangles that lie on flat surfaces, aiming to reduce the polygon count without sacrificing significant details of the object. Most simplification methods can significantly improve the 3D mesh efficiency in terms of data size.
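A hedged sketch of quadric edge collapse decimation, using the implementation built into Open3D, is given below; the target triangle count is an arbitrary example and would normally be chosen from the limits of the intended delivery platform.

```python
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("mesh_clean.ply")   # hypothetical input mesh
print("input triangles:", len(mesh.triangles))

# quadric-error-metric decimation down to a web-friendly triangle budget
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=100_000)
simplified.compute_vertex_normals()

o3d.io.write_triangle_mesh("mesh_lowpoly.ply", simplified)
```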
Illustration of high resolution polygon mesh model and simplified low polygon mesh model (Discovery Programme)
Mesh retopologisation 
Extreme simplification of complex meshes, such as for use in computer games and simulations, usually cannot be done automatically. Important features are dissolved and in extreme conditions even the topology is compromised. Decimating a mesh to an extreme level can be achieved by an empirical technique called retopology. This is a 3D modelling technique where special tools are used by the operator to generate a simpler version of the original dense model, utilising the original topology as a supportive underlying layer. This technique keeps the number of polygons at an extreme minimum, while at the same time allowing the user to select which topological features should be preserved from the original geometry. Retopology modelling can also take advantage of parametric surfaces, like NURBS, in order to create models of infinite fidelity while requiring minimum resources in terms of memory and processing power. Some of the commonly available software that can be used to perform the retopology technique includes: 3D Coat, Mudbox, Blender, ZBrush, GSculpt, Meshlab Retopology Tool ver 1.2. Mesh retopologisation can be a time consuming process; however, it produces better quality lightweight topology than automatic decimation. It also facilitates the creation of humanly recognizable texture maps.
Image illustrating a low polygon mesh before (left) and after retopologisation (right)
TEXTURE MAPPING 
Modern rendering technologies, both interactive and non-interactive, allow the topological enhancement of low complexity geometry with special 2D relief maps that can carry high frequency information about detailed topological features such as bumps, cracks and glyphs. Keeping this type of morphological feature in the actual 3D mesh data requires a huge number of additional polygons. However, expressing this kind of information as a 2D map and applying it while rendering the geometry can be far more efficient. This can be achieved by taking advantage of modern graphics card hardware while at the same time keeping resource requirements to a minimum. Displacement maps are generated using specialised 3D data processing software, e.g. the open source software xNormal. The software compares the distance from each texel on the surface of the simplified mesh against the surface of the original mesh and creates a 2D bitmap-based displacement map.
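The core of the baking step is a distance comparison between the two surfaces. The sketch below (using the trimesh library; file names are hypothetical) samples points on the simplified mesh and measures their distance to the original detailed mesh, i.e. the values that a baking tool such as xNormal would arrange into a UV-space bitmap. It does not write the bitmap itself, but the same numbers also serve as a quick check of how much geometric detail the simplification discarded.

```python
import trimesh

high = trimesh.load("mesh_raw.ply")       # detailed mesh
low = trimesh.load("mesh_lowpoly.ply")    # simplified mesh that will carry the map

# sample candidate "texel" positions on the low-poly surface
points, face_ids = trimesh.sample.sample_surface(low, 10_000)

# distance from each sample to the detailed surface: the displacement values
closest, distances, _ = trimesh.proximity.closest_point(high, points)

print("max deviation:", distances.max())
print("mean deviation:", distances.mean())
```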
Diagram illustrating the different texture maps which can be employed to enhance the display of a lightweight 3D model. From top: UV map, normal map, image map and ambient occlusion map (Discovery Programme)
Post-Process B - Model structuring 
Depending on the scale and on the morphological complexity, a geometric 3D reconstruction of an artefact, architectural detail 
or an archaeological site generally leads to the representation of a single (and complex) geometric mesh or a collection of 
geometric entities organized according to several criteria. The model structuring strategy is usually carried out with the aim of 
harmonizing the hierarchical relations, which can express the architectural composition of a building (e.g. relations between 
entities and layouts) and can also be used as a guideline for structuring the related metadata. In some cases, it may be important to identify a domain expert to ensure that the chosen segmentation (e.g. temporal components) and nomenclature (e.g. specialized vocabulary) are coherent with archaeological and architectural theories.
Examples of geometric reconstruction techniques (CNRS-MAP) 
According to the technique used and to the general purpose of the 3D representation, the results of a geometric reconstruction 
can be structured in four ways: 
1. Single unstructured entity (e.g. dense point clouds, or detailed mesh) 
2. Decomposed in elementary entities (e.g. 3D models composed by few parts) 
3. Decomposed in elementary entities hierarchically organized (e.g. 3D models decomposed in several parts for 
expressing the architectural layouts) 
4. Decomposed in entities organized in classes (e.g. 3D models decomposed in several parts for expressing the classification of materials, temporal states, etc.)
According to the chosen model structuring strategy, the final dataset structure (including geometry and visual enrichment) can be composed in several ways.
Example of 3D model structuring (CNRS-MAP) : on the left, according to temporal states; on the right, 
according to a morphological segmentation (architectural units) (CNRS-MAP) 
3D Geometry
• Single structured 3D file with one level of detail
• Multiple independent 3D files with one level of detail
• Multiple independent 3D files with multiple levels of detail
Textures
• Embedded into the 3D geometry file, e.g. per-vertex colour
• Stored as external 2D files, e.g. UV maps
Post Process C - Visual enrichment of 3D models 
Several computer graphics techniques can be utilised for the visual enhancement of the 3D models produced from the 
geometric reconstruction processes. These guidelines focus on those techniques which provide a 3D simulation consistent with 
the visual and geometric characteristics of artefacts and monuments (reality capture) and other techniques, mainly used for the 
dissemination of 3D cultural data. The visual enrichment techniques described below are ordered from those that ensure a strong 
geometric consistency with the real object to techniques that introduce increasing approximations: 
• Texture extraction and projection starting from photographs finely oriented on the 3D model (e.g. image-based modelling, photogrammetry)
• Texturing by photographic samples of the real materials of the artefact
• Texturing by generic shaders
• Interactive or semi-automatic reconstruction based on relevant profiles
• Interactive or semi-automatic reconstruction based on primitives adjustment
• Interactive reconstruction based on technical iconography (plans, cross-sections and elevations)
• Interactive reconstruction based on artistic iconography (sketches, paintings, etc.)
Example of visual enrichment based on the projection of textures starting from photographs finely oriented on to a primitives 3D 
model (left) and the projection of panoramic imagery on organic 3D meshes (right) (CNRS-MAP / Discovery Programme) 
Post Process D - Hypothetical reconstruction 
The hypothetical reconstruction of an architectural object or archaeological site to a previous state is a process primarily related to the field of historical studies. Nevertheless, some specific technical and methodological issues with the 3D graphical representation of missing (or partially destroyed) heritage buildings are often integrated in 3D reconstruction approaches. While primarily related to the analysis of historical images and knowledge, the methodological approaches for the creation of hypothetical reconstructions can be based on the integration of 3D metric data of existing parts of the object together with the reconstruction of the object's shapes starting from graphical representations of the artefact/monument. Depending upon the source material available, the 3D reconstruction may be created based upon a combination of the following methods:
• the 3D acquisition of existing (or previously existing) parts
• previous 3D surveys of existing (or previously existing) parts
• non-metric iconographic sources of the studied artefact
• iconographic sources (metric and/or non-metric) related to similar artefacts
In addition where reconstructions are created the following recommendations should be taken into account: 
• Identify the scientific advisor(s) who can guide and validate the 3D model during its reconstruction
• Document information about additional sources (imagery and bibliographical references) utilised in the elaboration of the 3D model
• Clearly indicate and save information indicating the degree of uncertainty, e.g. information gaps within the 3D model.
Example of 3D hypothetical reconstruction of a past state (CNRS-MAP) 
Creating complementary 2D media (derived from the 3D model) 
During the creation of 3D models of artefacts, complementary 2D media can also be produced. This 2D media can be produced in different ways, depending on the type of 3D source (point cloud, geometric model, visually enriched 3D model), as well as on the final visualisation type (static, dynamic, interactive). This additional content can be used to visualise content which cannot be successfully visualised through an interactive 3D web model, e.g. renderings of highly detailed 3D models or visualisation of full point cloud datasets.
Static images 
• 3D renderings of visually enriched models from several perspectives
• Elevations, plans and sections of point cloud data
• Images highlighting specific features of the cultural object
Animation 
• Turntable video
• Fly-through animation and video tours
• Structural animation highlighting different components of an artefact or monument and their interrelationship
• Temporal animation highlighting the chronological change of a structure, e.g. animation from present day ruin back to reconstruction model
Interactive Images 
• Panoramas
• VR Objects
Complementary 2D media derived from the 3D model: 3D models of the current state and of hypothetical 10th, 11th and 12th century states, each with associated images and video. Abbey of Saint-Michel de Cuxa (CNRS-MAP)
3D Publishing Methodology 
This section of the guidelines outlines the different methodologies and technical solutions for the optimal delivery and display of rich and complex 3D assets online. When evaluating publication formats, the selection needs to consider the wide range of potential users, from the general public to the researcher. The online publishing choice should be based upon the following criteria:
• serve a wide variety of needs and users
• maximise the 3D user experience
• focus on accessibility, providing an easy and intuitive experience for the unequipped user
• maximise the availability of the 3D content on as many browser platforms as possible (desktop and mobile)
• focus on a single release of models which can operate on as many platforms and operating systems as possible, to facilitate efficient and sustainable production
• avoid users having to install additional software or plugins
• support the concept of resource exploration (e.g. integrated URLs)
Creators of 3D content will also need to consider whether the online 3D models require file format conversion and optimisation procedures to enable their use online and to ensure a responsive and pleasant user experience. It is important to evaluate which approach is most appropriate, taking into account the potential effort required for file format conversions and optimisation procedures.
Online publication technologies 
A range of suitable solutions exists for the creation and publication of online 3D content, each with its benefits, limitations and applicability to cultural heritage.
Table summarising the suitability of the available online publication technologies (3D PDF, HTML5/WebGL, X3D, Unity3D/UnReal and Pseudo-3D) for objects, complex buildings and sites of low and high complexity, indicating where a technology can be used directly, where an optimised model or a point cloud (e.g. in the Nexus streaming format) is required, and where only special cases (e.g. glass) apply.

3D PDF
The 3D PDF offers the ability to integrate 3D models and annotations within a PDF document. The 3D PDF format natively supports the Universal 3D (U3D) and Product Representation Compact (PRC) 3D file formats. The 3D PDF format was previously recommended within two EU projects: CARARE and Linked Heritage.
Advantages include: 
• Predefined views can be embedded for the user, e.g. inside different rooms of a building
• The majority of users already have a PDF viewer such as Acrobat Reader installed on their computer
• Models are relatively easy to use
• Models are self-contained: this allows the use of a single Uniform Resource Identifier (URI) in order to define a complete 3D model
• The user is provided with limited tools to measure and section
• Suitable for desktop browsing: direct visualisation of 3D PDF on desktop computers
• Textures can use the jpeg compression feature of 3D PDF to reduce file sizes
• Additional media content such as text, video and images can be embedded within the PDF
Disadvantages include: 
• When opening a 3D PDF document through a browser, which is often the case with hyperlinked documents, different display behaviours occur depending on the browser, as 3D PDF is not supported within the web browser itself due to security issues
• PDF must be viewed within Acrobat Reader, which is not obvious for the non-technical user
• Models are not normally highly optimized for online use, resulting in long download times and an inability to work on slower machines
• The use of 3D PDF on mobile devices requires the use of an App with limited functionality
The main authoring platform is Acrobat Pro, which, in combination with the 3D PDF Converter plug-in (only on Windows) and additional software, allows the import of 3D models in a large number of file formats, together with additional media. 3D PDF files can also be created in Acrobat Pro without the Tetra4D Converter plug-in if one is capable of translating the 3D models into the U3D file format (for example through MeshLab); this workflow is available on both Mac and Windows.
3D PDF model of a stone high-relief depicting a hunter with a hare accompanied by a mastiff (Universidad de Jaén)
HTML5/WebGL Solutions 
With the advent of HTML5 and its associated WebGL JavaScript API, the interactive rendering of 3D visualisations can be achieved in a web browser, without installing additional software or plugins, by using the canvas element of HTML5. WebGL was utilised within the 3D-COFORM project as the method of choice for online 3D delivery. Most new HTML5/WebGL solutions use a cloud solution, in which the 3D models reside on servers of the company providing the visualisation software, but the final model can be embedded on a normal HTML web page using canvas and iframes.
Advantages include: 
• Supported automatically by many HTML5 desktop browsers (Chrome, Firefox, Opera, Internet Explorer); however, Safari requires users to enable it
• Increasing mobile support (Blackberry's mobile browser fully supports WebGL content and there is partial support on the Android Chrome browser)
• Allows direct access to the graphics processing unit (GPU) on the hardware display card present in the computer
• As WebGL utilises HTML5, the minimum requirements for creating a WebGL application are a text editor and a web browser
• Can be used for the visualisation of point cloud data
Disadvantages include: 
• iOS does not currently support WebGL, but from iOS 8 this will be implemented and is currently being beta tested
• Security concerns exist as WebGL utilises the GPU and can give a malicious program the ability to force the host computer system to execute harmful code
• It is not supported by older graphics cards
• There is currently a lack of development environments
A range of applications exists for WebGL-based 3D publishing, typically storing the 3D data on cloud-based servers and providing visualisation of the 3D content. They are summarised below by the type of 3D model each is best suited to.

Object/artefact models:

P3D
• Online storage: a free allowance (Mb) and a larger paid allowance (Gb)
• Visualisations can use alpha, bump, glossy, AO, glow, detail and spherical reflection maps in the paid version
• Only obj format supported
• Viewport shading option available

Big Object Base (BOB) Publish
• Paid service only
• Support for large meshes which exceed the WebGL triangle count
• Flash alternative for non-WebGL browsers
• Option of a viewer App for mobile devices
• 3D streaming capability for multi-resolution models

3DHOP
• Nexus format: ability to compress and stream 3D content that refines gradually depending upon the zoom level of the user
• Limited to colour-per-vertex data
• User has the ability to dynamically adjust the lighting position
• Efficient presentation of 3D scan data, as the simplification/optimisation processing normally required to view detailed models is not required
• Offers automated pseudo-3D solutions for browsers where WebGL is not available

SketchFab
• Free, with the ability to import different file formats
• Visualisations can use standard diffuse, specular and shininess parameters, and lightmaps
• Annotation of objects available

Scene/building models:

CopperCube 3D
• Suitable for complex scenes requiring walkthrough and guided tours
• Comprehensive built-in features to interact with a building or site (collision detection, walking on surfaces)
• Light mapping and particle effects supported
• Content available through mobile apps
• Character animation available
• Supports import of numerous file formats
• Supports immersive environments with Oculus Rift; Windows and Mac availability

Point cloud models:

Potree
• Ability to incrementally load LOD point cloud data
• User has the ability to adjust LOD and point size within the viewer
• The point cloud dataset must be processed using a free conversion tool
• Open source solution
Tholos in Delphi, Greece 3D point cloud model viewed on-line using the Potree WebGL viewer (CNRS-MAP) 
Retopologised light weight model of the Market Cross, Glendalough 
viewed in the SketchFab online WebGL viewer (Discovery Programme) 
Capital in Nexus on-line viewer format from the Cefalu cloister in Sicily, Italy (ISTI-CNR) 
X3D 
X3D is the technological successor and extension to VRML which is recognised by the International Organisation for 
Standardization (ISO). Currently X3D provides native authoring and use of declarative XML-based X3D scenes which can be 
viewed within a HTML5 web browser, and provides Extensible Markup Language (XML) capabilities within 3D to integrate with 
other WWW technologies. 
Advantages include: 
• Provides enhanced 3D visualisation capabilities: multi-stage and multi-texture rendering, light map shaders, real-time reflection, Non-uniform rational basis splines (NURBS)
• Utilising the X3DOM JavaScript library, X3D scenes become part of the HTML DOM
• X3DOM can be used on top of WebGL, therefore it can be run directly in the browser without any plugin
• 3D models can utilise GNU gzip compression to reduce their file size
The 3D model of the Metope Sele heraon displayed within 
an X3D viewer (Fondazione Bruno Kessler) 
Disadvantages include: 
• The 3D model is constructed from multiple files, therefore the file structure is not self-contained and cannot be referenced via a single URI
• Compatibility issues can exist between the 3D model and the specific viewer; multiple files, e.g. texture maps, are required to construct the scene
A wide range of authoring tools is available for the production of X3D models, or with X3D export functions, including open source (Blender and Meshlab) and paid solutions (AC3D).
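As a minimal illustration of publishing an X3D scene without a plugin, the sketch below writes an HTML page that loads a model through the X3DOM JavaScript library; the model file name is hypothetical, and the x3dom.org script locations are assumptions that should be checked against the current X3DOM release.

```python
# generate a minimal HTML viewer page for an exported X3D model
page = """<!DOCTYPE html>
<html>
  <head>
    <script src="https://www.x3dom.org/download/x3dom.js"></script>
    <link rel="stylesheet" href="https://www.x3dom.org/download/x3dom.css"/>
  </head>
  <body>
    <x3d width="800px" height="600px">
      <scene>
        <inline url="model.x3d"></inline>
      </scene>
    </x3d>
  </body>
</html>
"""

with open("viewer.html", "w", encoding="utf-8") as f:
    f.write(page)
```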
Unity - Serious Games Solutions 
Technology solutions developed for the provision of online gaming activities can be utilised for the visualisation and 
exploration of cultural heritage objects. Unity is one such game platform which can provide rich 3D environments for users.
Advantages include: 
• Advanced visualization features including real-time global illumination, reflection probes, physically based shading, ability to embed audio, complex animation
• Can be utilised on all major desktop platforms (Windows, MacOS, Linux) and all major mobile platforms (Android, iOS, Windows Phone, Blackberry)
• Collision detection, the notion of ground surfaces and interactive objects, e.g. doors
• Provides users with exploratory and non-linear 3D space
• Gaming environment is easy to use for non-technical users
• Very suitable for the provision of interactive models of heritage spaces (e.g. buildings and archaeological sites)
• Web publishing is free
• Large community of users providing additional tools and plugins
Disadvantages include: 
• Currently Unity requires a plug-in to be installed on the user's machine; however, from the release of Unity v.5, online publishing within HTML5 capabilities will be available
• Cost of software required to author 3D scenes if Pro functions are required
Other game engine platforms adopted for serious gaming such as the Unreal Development Kit (UDK) are available; however, 
most require the installation of an additional plug-in. 
Unity3D test on the 3D virtual reconstruction of the Ename abbey in 1300 (VisDim) 
Pseudo3D (ObjectVR) solutions 
Pseudo3D provides the user with a near-3D experience by allowing the user to navigate interactively through a series of images taken at different orientations, which mimics real 3D visualisation. Pseudo3D can provide solutions to view 360 panoramas or to provide an orbital view of an object (ObjectVR). A Pseudo3D solution is a valuable tool for online display where:
• Complex 3D models cannot be rendered online in real time, e.g. large point cloud models
• 3D digitisation of an object is not possible, but photography with the resulting images combined allows the user a pseudo3D experience
• It provides a solution for users that have hardware with very limited graphical capabilities
• Particularly suitable for the display of complex 3D artefact models
Several software solutions are available to construct ObjectVR visualisations (Flashicator, BoxshotVR, Object2VR, Krpano), all of which can produce content via HTML5 (use of QuicktimeVR requires a plugin and is therefore not suggested). Many of these tools also offer the user the ability to zoom into the object and closely inspect the models if high resolution images are used to create the ObjectVR. However, one limitation of this solution is that it confines the user to visualising the object through predefined paths.
Two images from an ObjectVR visualisation of the abbey of Ename in 1665 (by VisDim) 
Remote Rendering 
Interactive remote rendering uses the combination of an interactive low resolution 3D model (visualised through WebGL) with 
rendering the corresponding high resolution 3D model on a remote server and sending just the rendered image to replace the 
low resolution WebGL visualisation. An example of this application is the Venus 3D model publishing system (CCRM Lab). 
Advantages include: 
• The 3D model does not need to be transferred over the internet or reside on the user's computer; only the jpeg image is transferred
• The user has the ability to dynamically alter the lighting position and inspect detailed 3D models
Disadvantages to this method include: 
• Centralised hosting of the service would incur ongoing costs
• The lag time experienced whilst waiting for the render to occur can be quite off-putting for the user and is dependent upon the user's internet speed
IPR Considerations for online publishing 
An additional consideration for online publication is the IPR implications of the 3D models. Although the ability to potentially "steal" 3D models and visualisations should not be considered a major threat, several factors should be considered depending upon the publication method, including:
• 3D files which reside on a server and can be downloaded for visualisation, e.g. 3D PDF, could potentially be reused and altered by the user. Password protection is available to encrypt the data, although there is the potential to bypass this and extract the 3D model
• Where 3D models are offered to the user through an embedded HTML5 service and the 3D data is hosted by a third party (e.g. SketchFab), care must be taken to inspect that party's rights on the uploaded data
METADATA 
Running in parallel to the 3D capture, modelling and publication activities, the creation of metadata is 
essential to the success of the processing pipeline. The metadata created within the pipeline provides 
key information and context data in five key areas:
1. It describes in detail the artefact or monument which is being modelled in 3D and its provenance 
2. It describes in detail the digital representation of the artefact or monument and its online location 
3. It provides technical information and quality assurance on the processes and methods utilised in
the digitisation and modelling of heritage objects 
4. It provides information on the access, licensing and reuse of the created 3D models and any 
associated digital content 
5. It enables the search, discovery and reuse of content through the mapping of metadata to 
aggregators e.g. Europeana Data Model (EDM) 
CARARE 2.0 Metadata Schema 
To construct a comprehensive metadata record for digital content created through the pipeline, which adheres to the five key areas described above, the CARARE 2.0 metadata schema was selected. The CARARE metadata schema was developed during the EU co-funded three-year CARARE project, which addressed the issue of making digital content, including information about archaeological monuments, artefacts, architecture and landscapes, available to Europeana's users. The CARARE schema works as an intermediate schema between existing European standards and the EDM by:
• ensuring interoperability between the native metadata held by heritage organisations and Europeana
• creating a metadata schema able to map the existing original metadata into a common output schema
• supporting the full range of descriptive information about monuments, buildings, landscape areas and their representations
The CARARE schema is focussed on a heritage asset and its relations to digital resources, activities and collection information. The fundamental elements within its structure are listed below (a minimal record sketch follows the list):
CARARE Wrap - the CARARE start element. It wraps the Heritage Asset with the other information resources
Heritage Asset Identification (HA) – the descriptive information and metadata about the monument, historic building or artefact. The ability to create relations between heritage asset records allows the relationships between individual monuments that form parts of a larger complex to be expressed
Digital Resource (DR) – these are digital objects (3D models, images, videos) which are representations of the heritage asset and are provided to services such as Europeana for reuse
Activity (A) - these are events that the heritage asset has taken part in; in this case it is used to record the data capture and 3D modelling activities (paradata) which are utilised to create the 3D content
Collection (C) – this is a collection level description of the data being provided to the service environment (Europeana)
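A minimal sketch of how such a record could be assembled programmatically is shown below; the element names are simplified stand-ins for the CARARE 2.0 themes described above, not the full schema, and all values are invented.

```python
import xml.etree.ElementTree as ET

# skeleton of one record following the CARARE 2.0 themes (simplified element names)
wrap = ET.Element("carareWrap")
record = ET.SubElement(wrap, "carare", id="example-record-001")

ha = ET.SubElement(record, "heritageAssetIdentification")
ET.SubElement(ha, "appellation").text = "Example monument"

dr = ET.SubElement(record, "digitalResource")
ET.SubElement(dr, "link").text = "https://example.org/models/monument.nxs"
ET.SubElement(dr, "format").text = "3D model"

activity = ET.SubElement(record, "activity")
ET.SubElement(activity, "name").text = "Terrestrial laser scanning survey"

collection = ET.SubElement(record, "collectionInformation")
ET.SubElement(collection, "title").text = "Example 3D-ICONS collection"

ET.ElementTree(wrap).write("carare_record.xml", encoding="utf-8", xml_declaration=True)
```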
Graphical example of the relations among the different themes (Heritage Asset, Digital Resources and Activities) of CARARE 2.0 
Object digital assets relationship within CARARE 
The creation of metadata for cultural heritage objects and their associated digital heritage assets, (3D models, images and 
videos) should adhere to the following approach to capture the relationship between digital replicas and their original 
monuments or artefacts. 
Diagram illustrating approach to metadata creation for multiple derivatives from a single cultural heritage object 
Paradata 
A specific form of metadata which is recommended within the 3D documentation process is paradata. Paradata is information and data which describes the process by which the 3D data was collected, processed and modelled, and can act as a quality control audit for the data. Examples of paradata include (a short extraction sketch follows this list):
• type of 3D data collection technique (image based or range based)
• type of equipment (model of the camera, lenses used, triangulation or TOF laser scanner, etc.)
• which software applications were used to process the data
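As an illustration of automatic paradata capture, the sketch below reads basic camera information from a survey photograph's EXIF header with the Pillow library; the file name is hypothetical, and which tags are present depends on the camera and the Pillow version.

```python
from PIL import Image, ExifTags

img = Image.open("IMG_0001.JPG")   # hypothetical survey photograph
exif = img.getexif()               # primary EXIF directory

paradata = {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
print(paradata.get("Make"), paradata.get("Model"), paradata.get("DateTime"))

# exposure and lens tags often sit in the Exif sub-IFD; in recent Pillow versions they
# can be read with exif.get_ifd(ExifTags.IFD.Exif)
```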
The recording of paradata can be achieved both automatically and systematically during the survey process. Where possible, paradata created automatically by capture devices, e.g. EXIF information from cameras, should be utilised. For all additional paradata information, standardised paradata recording sheets should be used to ensure the systematic recording of techniques, equipment and processes. An example paradata recording sheet created as part of the 3D-ICONS project is available online for reuse at the project website. In terms of the inclusion of paradata within the overall metadata schema, all paradata created can be mapped into the Activity component of the CARARE metadata schema.
Standardised vocabulary 
Where possible, standardised vocabularies and their associated persistent uniform resource identifiers (URIs) should be utilised within the metadata to develop and promote the use of semantic tools, enabling interoperability, integration and the migration of the digital resources into the Linked Open Data format.
CARARE Theme – Relevant resources
Actor: FOAF (http://www.foaf-project.org/); DBpedia (http://dbpedia.org/About)
Concepts: Gemet Thesaurus (http://www.eionet.europa.eu/gemet); Getty Thesaurus (http://www.getty.edu/research/tools/vocabularies/aat/); HEREIN Thesaurus (http://thesaurus.european-heritage.net/herein/thesaurus/); ICCD/Cultura Italia Portal (http://www.culturaitalia.it/); Linked Data Vocabularies for CH (http://www.heritagedata.org/)
Spatial Data: GeoNames (http://www.geonames.org/); Getty Thesaurus of Geographic Names (http://www.getty.edu/research/tools/vocabularies/tgn/); Ancient Place names - Pleiades (http://pleiades.stoa.org/)
Table summarising available recognised ontologies and thesauri which can be used in metadata creation for cultural heritage objects
Resources for Metadata Creation 
The actual process of metadata creation can be achieved using two different application paths:
Illustration of the different strategies in the implementation of metadata creation 
Strategy 1: Metadata Creation Tool 
For those institutions and organisations which have no previous descriptive data relating to their collections to map, or have little experience in the production of XML metadata records, the creation of metadata can be achieved utilising the online 3D-ICONS metadata creation tool. Available online (http://orpheus.ceti.gr/3d_icons), the tool provides the user with the ability to define separate building blocks of the CARARE metadata schema:
• Organization – The organisation(s) with the responsibility for the 3D digital object assets 
• Collection – A description of the overall 3D collection made available 
• Actor – The person/people who have carried out the data collection and processing tasks, e.g. geo-surveyor 
• Activity – Descriptive detail of the digitisation and modelling activities utilised, e.g. terrestrial laser scanning 
• Spatial data – Geographical location of the cultural heritage object 
• Temporal data – chronological period or date associated with the cultural heritage object 
• Digital Resources – Description of the digital representation file, e.g. JPEG image 
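A minimal sketch, assuming nothing about the tool's internals, of how these building blocks could be represented in code before being exported to XML; all class and field names are illustrative.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Actor:
    name: str
    role: str                 # e.g. "geo-surveyor"

@dataclass
class Activity:
    technique: str            # e.g. "terrestrial laser scanning"
    actors: List[Actor] = field(default_factory=list)

@dataclass
class DigitalResource:
    file_format: str          # e.g. "image/jpeg"
    link: str                 # URL of the resource

@dataclass
class HeritageAssetRecord:
    organization: str         # organisation responsible for the assets
    collection: str           # collection the asset belongs to
    spatial: str              # geographical location
    temporal: str             # chronological period or date
    activities: List[Activity] = field(default_factory=list)
    resources: List[DigitalResource] = field(default_factory=list)

# Example record (hypothetical values):
record = HeritageAssetRecord(
    organization="Example Heritage Body",
    collection="3D models of field monuments",
    spatial="53.35 N, 6.26 W",
    temporal="Early Medieval",
)
```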
40 
Diagram labels (summarised): strategies 1–3; legacy data / DB; mapping (MINT2); metadata editor – creating metadata from scratch, adding missing fields; ingestion – publication (MORe2). 
Defining a Heritage Asset within the metadata creation tool 
View of associated digital assets within the metadata creation tool 
41
Strategy 2: MINT2 Mapping Tool 
For those organisations which already have their metadata created and held within their own formalised cataloguing 
management software, e.g. museum collections databases, this existing data can be reused to form the main component of the CARARE 
metadata record. To achieve this, the MINT 2 metadata services tool can be employed. 
MINT 2 services comprise a web-based platform designed and developed to facilitate aggregation initiatives for 
cultural heritage content and metadata in Europe. The platform offers an organisation a management system that allows the 
operation of different aggregation schemes (thematic or cross-domain; international, national or regional) and corresponding 
access rights. Registered organisations can upload (HTTP, FTP, OAI-PMH) their metadata records in XML or CSV format in order to 
manage, aggregate and publish their collections. 
The CARARE metadata model serves as the aggregation schema to which the ingested data is mapped. Users can define their 
metadata crosswalks from their own schema to CARARE with the help of a visual mappings editor utilising a simple drag-and-drop 
function which creates the mappings. The MINT tool supports string manipulation functions for input elements in order 
to perform 1-n and m-1 mappings between the two models. Additionally, structural element mappings are allowed, as well 
as constant or controlled value (target schema enumerations) assignment, conditional mappings (with a complex condition 
editor) and value mappings between input and target value lists. Mappings can be applied to ingested records, edited, 
downloaded and shared as templates between users of the platform. 
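The idea of a crosswalk can be illustrated with a toy example: the sketch below maps hypothetical in-house field names onto hypothetical target-schema paths, including a simple 1-n mapping. It is a sketch of the concept only, not MINT 2 code.

```python
# Toy crosswalk: one source field may feed several target-schema fields (1-n).
CROSSWALK = {
    "object_title": ["appellation"],
    "site_name": ["spatial/placeName"],
    "date_found": ["temporal/displayDate", "temporal/periodName"],  # 1-n mapping
}

def apply_crosswalk(source_record, crosswalk=CROSSWALK):
    """Return a flat dict keyed by target-schema paths."""
    target = {}
    for source_field, target_paths in crosswalk.items():
        if source_field in source_record:
            for path in target_paths:
                target[path] = source_record[source_field]
    return target

legacy = {"object_title": "High cross", "site_name": "Monasterboice",
          "date_found": "9th century"}
print(apply_crosswalk(legacy))
```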
Screen shot of the mapping procedure within MINT 2 
Once mapped, the MINT tool's preview interface enables the user to visualise the steps of the aggregation, including the current 
input XML record, the XSLT of their mappings, the transformed record in the target schema, subsequent transformations from 
the target schema to other models of interest (e.g. Europeana's metadata schema), and available HTML renderings of each 
XML record. 
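The transformation step itself can be reproduced outside the platform. The hedged sketch below applies a trivial stand-in XSLT stylesheet to a source record using the lxml library; the library choice, element names and stylesheet are assumptions for illustration, not something the guidelines mandate.

```python
from lxml import etree

# A deliberately trivial stylesheet mapping <record>/<title> to <heritageAsset>/<appellation>.
xslt_doc = etree.XML(b"""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/record">
    <heritageAsset>
      <appellation><xsl:value-of select="title"/></appellation>
    </heritageAsset>
  </xsl:template>
</xsl:stylesheet>
""")
transform = etree.XSLT(xslt_doc)

source = etree.XML(b"<record><title>Cahergal stone fort</title></record>")
result = transform(source)
print(etree.tostring(result, pretty_print=True).decode())
```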
42
Visualization of the mapped metadata record in MINT 2 
Relating Metadata to Europeana – MoRe 2.0 
Once the metadata record packages have been created, by either the online metadata tool or the MINT 2 service, they are 
transformed into the Europeana Data Model (EDM) and delivered to Europeana using the Monument Repository (MoRe 2) services. The MoRe 2 
repository aggregator tool also enables ingested metadata records to be validated against specific quality control criteria, 
e.g. that correct spatial coordinates are used for the spatial location. 
The MoRe 2 system also provides users with summary statistics of their metadata records, including the number of Heritage 
Assets ingested and the number and type of digital media objects referenced, e.g. images, 3D models. Once validated and 
ingested, metadata records can then be easily published to Europeana with the click of a button. 
Screen capture of the MORE 2.0 tool displaying ingested metadata packages 
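A quality-control check of the kind described above can be as simple as a range test on coordinates. The sketch below is illustrative only and is not MoRe 2 code; the specific checks are assumptions.

```python
def validate_coordinates(lat, lon):
    """Return a list of validation errors (empty if the point is plausible)."""
    errors = []
    if not -90.0 <= lat <= 90.0:
        errors.append(f"latitude {lat} outside -90..90")
    if not -180.0 <= lon <= 180.0:
        errors.append(f"longitude {lon} outside -180..180")
    if lat == 0.0 and lon == 0.0:
        errors.append("coordinates are 0,0 - likely a missing value")
    return errors

print(validate_coordinates(53.349, -6.260))   # [] - plausible Dublin location
print(validate_coordinates(153.0, -6.260))    # latitude error reported
```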
43
LICENSING & IPR Considerations 
Capture MODELLING ONLINE DELIVERY 
METADATA 
LICENSING 
For the effective sharing and reuse of 3D content of heritage objects, a common framework is 
required to establish best practice in the management and licensing of 3D models and any associated 
digital objects (video, metadata & images). Understandably, many institutions have the concern that 
providing access to 3D content could potentially erode their commercial rights to the data. 
The standardised IPR scheme presented: 
• Identifies the key data and relationships which require management 
• Provides robust licences to retain commercial rights to the data whilst enabling reuse for 
educational and research activities 
• Collates suitable metadata with an appropriate (Creative Commons, CC) licensing structure for 
submission to Europeana 
• Examines the key copyright challenges faced by all parties involved in the process of capturing, 
processing, developing and presenting digital content 
44
IPR & the 3D pipeline 
The creative processes and activities involved in this 3D pipeline result in the generation of Intellectual Property Rights 
(IPR) at many junctures. The development of a suitable IPR model is relevant at all stages of the pipeline, from the earlier 
phases, which are dominated by controlled access rights, to the later phases, where substantial effort is invested in the 
modelling of captured 3D data to produce rich and effective 3D heritage content. This is important in terms of recognising 
that, while the content providers may control access, it is the later processes that have the highest costs and generate the greatest IPR. 
Illustration of the Object activity chain identifying the range of people and organisations involved in creating 3D content for cultural 
heritage 
The IPR scheme proposed here is integrated into all the activities of the 3D modelling pipeline, from initial data capture 
to the delivery of 3D heritage content online. Within the pipeline several key actors and organisations are defined: 
• Monument/artefact Manager – the organisation which is the custodian or owner of the heritage object, e.g. a museum 
• Imaging Partner – the company or institution which carries out the primary 3D data capture of the heritage object 
• 3D Development Partner – the company or institution which executes the 3D data modelling of the heritage object for 
delivery online 
• Distribution Partner – the organisation which hosts 3D content for public use 
• Commercialisation Partner – a company which wishes to establish a potential revenue path for the 3D data 
Within the processing pipeline there are several milestones where IPR agreements need to be applied. 
45
Access Agreement 
At the start of the pipeline, where Imaging Partners capture 3D data of a monument or artefact in the ownership or 
management of a third party (e.g. a national heritage organisation), it is good practice to establish an Access Agreement. 
This agreement outlines both the arrangements in place to physically access the site/museum to capture the data, and what 
level of control each party has over the initial survey data captured. Depending upon who is funding the work, two standard 
agreements are possible: 
• Full or co-funding for capture provided by the Imaging Partner – non-exclusive licences for both parties to make use of the 
primary data, with the IPR resting with the Imaging Partner 
• Full funding provided by the Heritage Organisation – assignation of the IPR by the Imaging Partner to the heritage body 
It is also important to clearly state the IPR on any subsequent 3D content derived from the original captured data, as these are 
new and distinct data sets and often require significant amounts of effort to produce the final deliverable 3D model. 
Derivatives Agreements 
Depending upon the attitude of the Imaging Partner to data sharing, the original 3D capture data (e.g. high quality 
point cloud data) will not normally be publicly accessible. However, when new products are derived by a third party, a 
Business-to-Business (B2B) derivative agreement will be required. For organisations where the data capture and 3D modelling 
are carried out within the same institution, no additional derivative agreement is required. 
Metadata Agreements 
Where metadata and paradata are provided by 3D content creators to third parties such as Europeana for the purpose of 
increasing the visibility and reuse of the 3D models, a Creative Commons Zero (CC0) licence is usually adopted. The metadata 
agreement will not interfere with any subsequent commercialisation of content by the rights holder. 
Public Use Agreements 
The 3D models and other associated derived products, such as videos and images, will normally be made widely available to 
the public under a more restrictive arrangement than the metadata agreement, to retain control over potential commercial 
and inappropriate future reuse. This will depend upon the policy of the 3D content creator organisation and can range 
from the restrictive (paid access, no reuse) to the liberal (CC0), but it is likely that organisations will want to retain some 
potential commercial value in their models. It is recommended that organisations at least apply the Creative Commons 
Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND) licence to their models, which allows redistribution and 
non-commercial reuse as long as the 3D content is unchanged and credits the creator organisation. The full range 
of potential rights statements available through Europeana can be found at http://pro.europeana.eu/web/guest/ 
available-rights-statements. 
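Where the chosen licence must also be expressed in the metadata record, a small lookup of standard Creative Commons URIs helps keep the values consistent. This is an illustrative Python sketch; the dictionary keys and the choice of version 4.0 URIs are assumptions (earlier 3.0 versions also exist), and the Europeana rights-statement list linked above remains the authoritative reference.

```python
# Least to most restrictive, paired with the standard creativecommons.org URIs.
CC_LICENSES = {
    "CC0":         "https://creativecommons.org/publicdomain/zero/1.0/",
    "CC BY":       "https://creativecommons.org/licenses/by/4.0/",
    "CC BY-SA":    "https://creativecommons.org/licenses/by-sa/4.0/",
    "CC BY-ND":    "https://creativecommons.org/licenses/by-nd/4.0/",
    "CC BY-NC":    "https://creativecommons.org/licenses/by-nc/4.0/",
    "CC BY-NC-SA": "https://creativecommons.org/licenses/by-nc-sa/4.0/",
    "CC BY-NC-ND": "https://creativecommons.org/licenses/by-nc-nd/4.0/",
}

def rights_uri(licence_code):
    """Return the licence URI to embed in the metadata record, or None if unknown."""
    return CC_LICENSES.get(licence_code)

print(rights_uri("CC BY-NC-ND"))
```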
Commercial Agreements 
Final 3D models, additional content (videos and rendered images) and supplementary data created within the 3D pipeline 
process have the potential to be commercialised. Licensing models to commercial image libraries or directly to end users can 
help fund the creation of higher quality models and may well be in the interest of all parties – as once created resources may 
be used commercially and non-commercially. These agreements are a critical part of stimulating an added value chain so that 
original survey work can reach its full potential. 
46
Diagram (summarised): the object activity chain links the Content Partner (objects and sites, provenance, archives, accreditation), the Imaging Partner (creates 1st generation content + IPR), the Development Partner (generates additional IPR and creates 2nd generation content), the Distribution Partner (makes visualisations available online and hosts distributable visuals), the Sales Partner (fulfilment and distribution, establishing revenue paths for materials) and Europeana (portal and search engine). Material exchanged along the chain includes 3D data, photography and supporting materials, together with textures, maps, digital merchandise and physical merchandise; access to assets and original IPR is also indicated. Five agreement points govern the chain: 1. Access Agreement; 2. Metadata Agreement (who, what, when, where); 3. Derivative Agreement; 4. Public Use Agreement; 5. Commercial Agreement. 
Visualisation of the different agreements and licence structures which can be utilised during the capture, modelling and reuse of 3D cultural heritage models 
47 
Creative Commons Key Facts 
Founded in 2001, and helped by the proliferation of the internet and of websites such as 
Wikipedia, Creative Commons has become one of the most recognised 
licensing structures available; it also forms the IP structure for Europeana. 
Creative Commons enables the sharing and use of creativity and knowledge through free, public 
and standardized infrastructures and tools that create a balance between the 
reality of the Internet and the reality of copyright laws. 
Creative Commons licenses require licensees to get permission to do any of the 
things with a work that the law reserves exclusively to a licensor and that the 
license does not expressly allow. 
Creative Commons licensees must credit the licensor, keep copyright notices 
intact on all copies of the work, and link to the license from copies of the work. 
CC licenses range from a fully open license, where users can copy, 
modify, distribute and perform the work, even for commercial purposes, all 
without asking permission (CC0), to the restrictive CC BY-NC-ND, where others 
can download your works and share them with others as long as they credit 
you, but they can’t change them in any way or use them commercially. 
48
Increased reuse restriction 
Public Domain - CC0 
Attribution - CC BY 3.0 
Attribution-ShareAlike - CC BY-SA 3.0 
Attribution-NoDerivs - CC BY-ND 
Attribution-NonCommercial - CC BY-NC 
Attribution-NonCommercial-ShareAlike - CC BY-NC-SA 
Attribution-NonCommercial-NoDerivs - CC BY-NC-ND 
49
Appendix 1: Additional 3D-ICONS Resources 
Project Reports 
D2.1 Digitisation Planning Report, Paolo Cignoni (CNR) and Andrea d’Andrea (CISA) 
D2.3 Case Studies for Testing the Digitisation Process, Anestis Koutsoudis, Blaz Vidmar and Fotis Arnaoutoglou (CETI) and 
Fabio Remondino (FBK) 
D3.1 Interim Report on Data Acquisition, Gabriele Guidi (POLIMI) 
D3.2 Final Report on Data Acquisition, Gabriele Guidi (POLIMI) 
D4.1 Interim Report on Post-processing, Livio de Luca (CNRS-MAP) 
D4.2 Interim Report on Metadata Creation, A. D’Andrea (CISA) with the collaboration of R. Fattovich and F. Pesando (CISA), A. 
Tsaouselis and A. Koutsoudis (CETI) 
D4.3 Final Report on Post-processing, Livio de Luca (CNRS-MAP) 
D5.1 3D Publication Formats Suitable for Europeana, Daniel Pletinckx and Dries Nollet (VisDim) 
D5.2 Report on publication, Daniel Pletinckx and Dries Nollet (VisDim) 
D6.1 Report on Metadata and Thesauri, Andrea d’Andrea (CISA) and Kate Fernie (MDR) 
D6.2 Report on Harvesting and Supply, Andrea d’Andrea (CISA) and Kate Fernie (B2C) 
D7.1 Preliminary Report on IPR Scheme, Mike Spearman, Sharyn Emslie (CMC) 
D7.2 IPR Scheme, Mike Spearman, Sharyn Emslie and Paul O’Sullivan (CMC) 
D7.4 Report on Business Models, Mike Spearman, James Hemsley, Emma Inglis, Sharyn Emslie and Paul O’Sullivan (CMC) 
All project reports are available from the 3D-ICONS website at the following 
URL: http://3dicons-project.eu/index.php/eng/Resources 
Publications 
D’Andrea, A., Niccolucci, F. and Fernie K., 2012. 3D-ICONS: European project providing 3D models and related digital content to 
Europeana, EVA Florence 2012. 
D’Andrea, A., Niccolucci, F., Bassett, and Fernie, K., 2012. 3D-ICONS: World Heritage Sites for Europeana: Making Complex 3D 
Models Available to Everyone, VSMM 2012. 
D’Andrea, A., Niccolucci, F. and Fernie K., 2013. CARARE 2.0: a metadata schema for 3D Cultural Objects. Digital Heritage 2013, 
International Congress, IEEE Proceedings. 
D’Andrea, A., Niccolucci, F. and Fernie K., 2013. 3D ICONS metadata schema for 3D objects, Newsletter di Archeologia CISA, 
Volume 4, pp. 159-181, 
Callieri, M., Leoni, C., Dellepiane, M. and Scopigno, R., 2013. Artworks narrating a story: a modular framework for the 
integrated presentation of three-dimensional and textual contents, ACM WEB3D - 18th International Conference on 
3D Web Technology, page 167-175 
pdf: http://vcg.isti.cnr.it/Publications/2013/CLDS13/web3D_cross.pdf 
50
Dell’Unto, N., Ferdani, D., Leander, A., Dellepiane, M. and Lindgren, S., 2013. Digital reconstruction and visualization in 
archaeology Case-study drawn from the work of the Swedish Pompeii Project, Digital Heritage 2013 International 
Conference, page 621-628 
pdf: http://vcg.isti.cnr.it/Publications/2013/DFLDCL13/digitalheritage2013_Pompeii.pdf 
Gonizzi Barsanti, S. and Guidi, G., 2013. 3D digitization of museum content within the 3D-ICONS project, ISPRS Ann. 
Photogramm. Remote Sens. Spatial Inf. Sci., II-5/W1, pp. 151-156. 
Online: www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/II-5-W1/151/2013/ 
Gonizzi Barsanti, S., Micoli, L.L., Guidi, G., 2013. 3D Digitizing a whole museum: a metadata centered workflow, 2013 Digital 
Heritage International Congress (DigitalHeritage), Vol. 2, pp. 307-310, IEEE, ISBN 978-1-4799-3169-9. 
Guidi, G., Rodríguez Navarro, P., Gonizzi Barsanti, S., Loredana Micoli, L., Russo, M., 2013. Quick textured mesh generation in 
Cultural Heritage digitization, Built Heritage 2013, Milan, Italy, pp. 877-882, [Selected for printed publication]. 
Online: http://www.bh2013.polimi.it/papers/bh2013_paper_324.pdf 
Hermon, S., Bakirtzis, N., Kyriacou, P., 2013. 3D Documentation – Analysis - Interpretation, Digital libraries of 3D data – 
access and inter-operability, and The cycle of use and re-use of digital heritage assets, Scientific Support for Growth & Jobs 
(2013): Cultural and Creative Industries, Brussels, Belgium., Session: posters and presentation. 
Hermon, S., Ben-Ami, D., Khalaily, H., Avni, G., Iannone, G., Faka, M., 2013. 3D documentation of large-scale, complex 
archaeological sites: The Givati Parking excavation in Jerusalem, Conference Proceedings, Digital Heritage 2013, Marseilles, 
France, vol 2, Session: Documentia. Digital Documentation of Archaeological Heritage, pp. 581 
Hermon, S., Niccolucci, F.,Yiakoupi, K., Kolosova, A., Iannone, G., Faka, M., Kyriacou, P., Niccolucci, V., 2013. Documenting 
Architectonic Heritage in Conflict Areas. The case of Agia Marina Church, Derynia, Cyprus, Conference Proceedings, Built 
Heritage 2013, Monitoring Conservation Management, Milan, Italy, 20 November, pp. 800 - 808. 
Available:http://www.bh2013.polimi.it/papers/bh2013_paper_216.pdf [20 Dec 2013]. 
Hermon, S., Khalaily, H., Avni, G., Reem, A., Iannone, G., Fakka, M., 2013. Digitizing the Holy – 3D Documentation and 
analysis of the architectural history of the “Room of the Last Supper” – the Cenacle in Jerusalem, Conference Proceedings, 
Digital Heritage 2013, Marseilles, France, vol 2, Session 3−Architecture, Landscape: Documentation  Visualization, 
pp. 359 - 362. 
Jiménez Fernández-Palacios, B., Remondino, F., Lombardo, J., Stefani, C. and L. De Luca, 2013. Web visualization of complex 
reality-based 3D models with Nubes, Digital Heritage 2013 Int. Congress, IEEE Proceedings. 
Leoni, C., Callieri, M., Dellepiane, M. Rosselli Del Turco, R. and O’Donnell, D., 2013. The Dream and the Cross: bringing 3D 
content in a digital edition, Digital Heritage 2013 International Conference, page 281-288 - October 2013 
pdf:http://vcg.isti.cnr.it/Publications/2013/LCDRO13/DreamAndTheCross.pdf 
Niccolucci, F., Felicetti, A., Amico, N. and D’Andrea, A., 2013. Quality control in the production of 3D documentation of 
monuments, Built Heritage 2013, proceedings 
http://www.bh2013.polimi.it/papers/bh2013_paper_314.pdf 
Remondino, F., Menna, F., Koutsoudis, A., Chamzas, C. and El-Hakim, S., 2013. Design and implement a reality-based 3D 
digitisation and modelling project, Digital Heritage 2013 Int. Congress, IEEE Proceedings. 
Ronzino, P., Niccolucci, F. and D’Andrea, A., 2013. Built Heritage metadata schemas and the integration of architectural data-sets 
using CIDOC-CRM , Built Heritage 2013, proceedings 
http://www.bh2013.polimi.it/papers/bh2013_paper_318.pdf 
Yiakoupi, K., Hermon, S., 2013. Israel Case Studies: The Room of the Last Supper and the Tomb of King David Hall, Presentation, 
Digital Heritage 2013, Marseilles, France, Session: “Exploring the 3D ICONS project: from capture to delivery”. 
51
Appendix 2: Project Partners 
Archeotransfert, France 
Athena Research and Innovation Centre in Information Communication & Knowledge Technologies (CETI), Greece 
Centre National de la Recherche Scientifique (CNRS-MAP), France 
CMC Associates Ltd., UK 
Consiglio Nazionale delle Ricerche (CNR-ISTI), Italy 
Consorzio Interdipartimentale Servizi Archeologici (CISA), Italy 
The Cyprus Research and Educational Foundation (CYI-STARC), Cyprus 
Visual Dimension bvba (VisDim), Belgium 
52
The Discovery Programme Ltd., Ireland 
Koninklijke Musea voor Kunst en Geschiedenis (KMKG), Belgium 
Muzeul Național de Istorie a României (MNIR), Romania 
National Technical University of Athens (NTUA), Greece 
Politecnico di Milano (POLIMI), Italy 
Universidad de Jaén, Andalusian Centre for Iberian Archaeology (UJA-CAAI), Spain 
Fondazione Bruno Kessler (FBK), Italy 
53
44

More Related Content

3D-ICONS Guidelines

  • 2. 44 GUIDELINES First published in 2014 by 3D-ICONS ©3D-ICONS Design and layout by: Ian McCarthy Printed in Ireland by: Paceprint, Shaws Lane, Sandymount, Dublin 4, Ireland 3D-ICONS is a project funded under the European Commission’s ICT Policy Support Programme, project no. 297194. The views and opinions expressed in this presentation are the sole responsibility of the authors and do not necessarily reflect the views of the European Commission.
  • 3. 44 CONTENTS Introduction 06 Guidelines 08 3D Data Capture Techniques 10 Short range techniques 11 Long & mid range techniques 13 Multi Scale Image Based Methods 14 Post Processing of 3D Content 18 Post-Process A - Geometric reconstruction 18 Post-Process B - Model structuring 23 Post Process C - Visual enrichment of 3D models 24 Post Process D - Hypothetical reconstruction 25 Creating complementary 2D media (derived from the 3D model) 26 3D Publishing Methodology 28 Online publication technologies 29 IPR Considerations 35
  • 4. 44 Metadata 36 CARARE 2.0 Metadata Schema 37 Resources for CARARE Metadata Creation 40 Relating Metadata to Europeana 43 LICENSING & IPR Considerations 44 IPR & the 3D pipeline 45 Access Agreement 46 CREATIVE COMMONS 48 Appendix 1: Additional 3D-ICONS RESOURCES 50 Appendix 2: Project Partners 52
  • 5. Introduction Public fascination with the architectural and archaeological heritage is well known, it is proven to be one of the main reasons for tourism according to the UN World Tourism Organisation. Historic buildings and archaeological monuments form a signicant component Europe’s cultural heritage; they are the physical testimonies of European history and of the dierent events that led to the creation of the European landscape, as we know it today. 44 The documentation of built heritage increasingly avails of 3D scanning and other remote sensing technologies, which produces digital replicas in an accurate and fast way. Such digital models have a large range of uses, from the conservation and preservation of monuments to the communication of their cultural value to the public. They may also support in-depth analysis of their architectural and artistic features as well as allow the production of interpretive reconstructions of their past appearance. The goal of the 3D-ICONS project, funded under the European Commission’s ICT Policy Support Programme which builds on the results of CARARE (www.carare.eu) and 3D-COFORM (www.3d-coform.eu), is to provide Europeana with 3D models of architectural and archaeological monuments of remarkable cultural importance. The project brings together 16 partners (see appendix 2) from across Europe (11 countries) with relevant expertise in 3D modelling and digitization. The main purpose of this project is to produce around 4000 accurate 3D models which have to be processed into a simplied form in order to be visualized on low end personal computers and on the web. The structure of this publication has been created with two distinct sections: Guidelines: Documentation of the digitisation, modelling and online access pipeline for the creation o f online 3d models of cultural heritage objects. Case Studies: 28 examples of 3D content creation by the 3D-ICONS partners across a range of monuments, architectural features and artefacts. 06
  • 6. 4407 IMAGE OF 3D CAPTURED DATA E.G. POINT CLOUD, MESH model of CHRYSIPPUS HEAD Greyscale radiance scaling shaded version of the Church of the Holy Apostles 3D model THE CENACLE COMPLEX - Xray filter view re-coloured, generated by meshlab
  • 7. Guidelines 44 The 3D-ICONS project exploits existing tools and methodologies and integrates them in a complete supply chain of 3D digitization to contribute a signicant mass of 3D content to Europeana. These guidelines aim to document this complete pipeline which covers all technical and logistic aspects to create 3D models of cultural heritage objects with no established digitization. Each section of these guidelines corresponds to one of the ve interlinked stages of the 3D-ICONS pipeline: 1. 3D Data Capture Techniques 2. Post Processing of 3D Content 3. 3D Publishing Methodology 4. Metadata 5. Licensing IPR Considerations When reading the guidelines it is important to understand that each stage in the processing pipeline is interrelated, and therefore one should look at the pipeline as a holistic approach to the challenge of capturing and presenting 3D models of cultural heritage models. Data capture, post processing and 3D publishing activities normally occur sequentially after each other. The direction of these activities is not only towards the nal online 3D model. In carrying out your own 3D heritage eorts, one should also consider the nal potential publishing methodology, and travel back up the supply chain to identify what are the most appropriate capture and modelling techniques to provide this online 3D solution. The processes involved with the creation of metadata and the selection of appropriate data licensing should be integrated at all stages of the pipeline. 08
  • 8. 44 Capture MODELLING ONLINE DELIVERY METADATA LICENSING These guidelines are a product of the eort of all project partners’ and are the synthesis of several project publications (see appendix 1) which can be consulted for in-depth documentation of the dierent components of the pipeline. The guidelines do not represent an exhaustive list of all the potential processing paths but provide, describe and explore the solutions provided by the 3D-ICONS project. 09
  • 9. 44 3D Data Capture Techniques Capture MODELLING ONLINE DELIVERY METADATA In recent years the development of technologies and techniques for the surface data capture of three-dimensional artefacts and monuments has allowed both geometrical and structural information to be documented. Several approaches have been developed, each of which addresses dierent circumstances and records dierent characteristics of the 3D artefact or monument. ACTIVE METHODS 3D Data CAPTURE PASSIVE METHODS IMAGE BASED METHODS LASER SCANNING STRUCTURED LIGHT RANGE SENSING TIME OF FLIGHT PHASE SHIFT At present there is a wide range of 3D acquisition technologies, which can be generally classied into contact and non-contact systems. Contact systems are not popular in the Cultural Heritage (CH) domain as they require physical contact with potentially fragile artefacts and structures. In contrast, non-contact systems have been used over the last decade in many CH digitisation projects with success. Non-contact systems are divided into active (those which emit their own electromagnetic energy for surface detection) and passive (those which utilise ambient electromagnetic energy for surface detection). Taxonomy of 3D data capture techniques LICENSING 10
  • 10. 44 Active range-sensing instruments work without contact with the artefact and hence full the requirement that recording devices will not potentially damage the artefact. In addition, their luminous intensity is limited to relatively small values and thus does not cause material damage (e.g. by bleaching pigments). These two properties make them particularly adapted for the applications in CH, where non-invasive and non-destructive analyses are crucial for the protection of heritage. The capabilities of the dierent technologies vary in terms of several criteria which must be considered and balanced when formulating appropriate campaign strategies. These include: • Resolution – the minimum quantitative distance between two consecutive measurements. • Accuracy - what is the maximum level of recorded accuracy? • Range – how close or far away can the device record and object? • Sampling rate – the minimum time between two consecutive measurements? • Cost – what is the expense of the equipment and software to purchase or lease? • Operational environmental conditions – in what conditions will this method work, i.e. is a dark working environment required? • Skill requirements – is extensive training required to carry out the data capture technique? • Use – what the 3D data will be used for, i.e. scientic analysis or visualisation? • Material – from what substance is the artefact/monument fabricated? There are signicant variations between the capabilities of dierent approaches. For example, triangulation techniques can produce greater accuracy than time-of-ight, but can only be used at relatively short range. Where great accuracy is a requirement, this can normally only be achieved with close access to the heritage object to be digitized ( 1m). If physical access to the artefact is dicult or requires the construction of special scaolding, other constraints need consideration (e.g. using an alternative non-invasive techniques). Alternatively, if physical access is impractical without unacceptable levels of invasive methods, then sensing from a greater distance maybe required utilising direct distance measurement techniques (TOF, Phase Deviation) leading to less accurate results. When selecting the appropriate methodology, consideration must also be given to the length of time available to carry out the data collection process and the relative speed of data capture of each technology. Short Range Techniques Laser Triangulation (LT) One of the most widely used active acquisition methods is Laser Triangulation (LT). The method is based on an instrument that carries a laser source and an optical detector. The laser source emits light in the form of a spot, a line or a pattern on the surface of the object while the optical detector captures the deformations of the light pattern due to the surface’s morphology. The depth is computed by using the triangulation principle. Laser scanners are known for their high accuracy in geometry measurements (50m) and dense sampling (100m). Current LT systems are able to oer perfect match between distance measurements and colour information. The method being used proposes the combination of three laser beams (each with a wavelength close to one of the three primary colours) into an optical bre. The acquisition system is able to capture both geometry and colour using the same composite laser beam while being unaected by ambient lighting and shadows. 11
  • 11. 44 Camera Collecting lens Projection lens Laser Source Object BasEline Structured Light (SL) Structured Light (SL) - also known as fringe projection systems - is another popular active acquisition method that is based on projecting a sequence of dierent alternated dark and bright stripes onto the surface of an object and extracting the 3D geometry by monitoring the deformations of each pattern. By examining the edges of each line in the pattern, the distance from the scanner to the object’s surface is calculated by trigonometric triangulation. Signicant research has been carried out on the projection of fringe patterns that are suitable for maximizing the measurement resolution. Current research is focused on developing SL systems that are able to capture 3D surfaces in real-time. This is achieved by increasing the speed of projection patterns and capturing algorithms. SHAPED OBJECT LIGHT STRIPE OBJECT PIXEL MATRIX CAMERA TRIANGULATION BASE STRIPE NUMBER STRIPE PROJECTOR CAMERA PIXEL Diagram illustrating the principles of laser triangulation (LT) based range devices Diagram illustrating the principles of structured light (SL) measurement devices 12
  • 12. 44 -3.00 Intensity Inside Instrument STOP START n s -2.00 -1.00 1.00 2.00 3.00 OVERVIEW Lens Outside Instrument Distance DISTANCE Light Emitted Light Returned History of Emitted Light Phase Shift 0.00 Measurement Object “ns stop watch” transmitter detector receiver S Long Mid Range Techniques Time of Flight (TOF) Time-Of-Flight (TOF) - also known as terrestrial laser scanning - is an active method commonly used for the 3D digitisation of architectural heritage (e.g. an urban area of cultural importance, a monument, an excavation, etc). The method relies on a laser rangender which is used to detect the distance of a surface by timing the round-trip time of a light pulse. By rotating the laser and sensor (usually via a mirror), the scanner can scan up to a full 360 degrees around itself. The accuracy of such systems is related to the precision of its timer. For longer distances (modern systems allow the measurement of ranges up to 6km), TOF systems provide excellent results. An alternative approach to TOF scanning is Phase-Shift (PS), also an active acquisition method, used in closer range distance measurements systems. Again they are based on the round trip of the laser pulse but instead of timing the trip they measure the wavelength phase dierence between the outgoing and return laser pulse to provide a more precise measurement. Diagram illustrating the principles of time of flight (TOF) measurement devices Diagram illustrating the principles of phase shift (PS) measurement devices 13
  • 13. 44 A B Ia Ib Pa Image overlap Model Pb Multi Scale Image based Methods Traditional Photogrammetry Image-based methods can be considered as the passive version of SL. In principle, image-based methods involve stereo calibration, feature extraction, feature correspondence analysis and depth computation based on corresponding points. It is a simple and low cost (in terms of equipment) approach, but it involves the challenging task of correctly identifying common points between images. Photogrammetry is the primary image-based method that is used to determine the 2D and 3D geometric properties of the objects that are visible in an image set. The determination of the attitude, the position and the intrinsic geometric characteristics of the camera is known as the fundamental photogrammetric problem. It can be described as the determination of camera interior and exterior orientation parameters, as well as the determination of the 3D coordinates of points on the images. Photogrammetry can be divided into two categories. These are the aerial and the terrestrial photogrammetry. In aerial photogrammetry, images are acquired via overhead shots from an aircraft or an UAV, whilst in terrestrial photogrammetry images are captured from locations near or on the surface of the earth. Additionally, when the object size and the distance between the camera and object are less than 100m then terrestrial photogrammetry is also dened as close range photogrammetry. The accuracy of photogrammetric measurements is largely a function of the camera’s optics quality and sensor resolution. Current commercial and open photogrammetric software solutions are able to quickly perform tasks such as camera calibration, epipolar geometry computations and textured map 3D mesh generation. Common digital images can be used and under suitable conditions high accuracy measurements can be obtained. The method can be considered objective and reliable. Using modern software solutions it can be relatively simple to apply and has a low cost. When combined with accurate measurements derived from a total station for example it can produce models of high accuracy for scales of 1:100 and even higher. Overlapping area of images captured at A and B are resolved within the 3D model space to enable the precise and accurate measurement of the model 14
  • 14. L1 L2 44 Airbase f f o1 b1 b2 t1 t2 o2 02 The basic principle of stereo photogrammetry. The building appears in two images, taken at L and L2 respectively. The top of the buildiing is represented by the points a1 and a2 and the base by b1 and b2 01 T B Semi Automated Image Based Methods In recent times, the increase in the computation power has allowed the introduction of semi automated image-based methods. Such an example is the combination of Structure-From-Motion (SFM) and Dense Multi-View 3D Reconstruction (DMVR) methods. They can be considered as the current extension of image-based methods. Over the last few years a number of software solutions implementing the SFM-DMVR algorithms from unordered image collections have been made available to the broad public. More specically SFM is considered an extension of stereo vision, where instead of image pairs the method attempts to reconstruct depth from a number of unordered images that depict a static scene or an object from arbitrary viewpoints. Apart from the feature extraction phase, the trajectories of corresponding features over the image collection are also computed. The method mainly uses the corresponding features, which are shared between dierent images that depict overlapping areas, to calculate the intrinsic and extrinsic parameters of the camera. These parameters are related to the focal length, the image format, the principal point, the lens distortion coecients, the location of the projection centre and the image orientation in 3D space. Many systems involve the bundle adjustment method in order to improve the accuracy of calculating the camera trajectory within the image collection, minimise the projection error and prevent the error-built up of the camera position tracking. 15
  • 15. 44 q11 q1i P Camera 1 Q1 Qj q2j Camera 2 1 q21 P 2 qi1 qij Camera i Pi Diagram illustrating the principles of structure from motion (SFM) measurement from multiple overlapping images Example of SFM methodology illustration the orientation and number of overlapping images utilised in the modeling of a building (CETI) 16
  • 16. 44 The resulting 3D point cloud data sets derived using SFM (CETI) Software Comments Web-service where the user uploads an image collection and the system returns a dense 3D reconstruction of the scene. The resulting 3D reconstruction is created using cloud computing technology and can be parsed by Meshlab Service is a part of a set of tools that are freely oered by the company and aim towards the ecient creation and publishing of 3D content on the Web. Their service can be accessed by a dedicated 3D data viewing-processing software tool that recently has been made available for the iOS mobile platform Web-based 3D reconstruction from images service. The user can upload the images through the Website’s interface without the need of downloading any standalone software application Reconstructs the content of an image collection as a 3D dense point cloud but it requires the positioning of specic photogrammetric targets around the scene in order to calibrate the camera Open source solution to create 3D models from photographs. The software doesn’t provide a DMVR option, but allows the user to manually create low complexity 3D meshes that can be textured automatically (image back-projection) by the software SFM-DMVR software solution can merge the independent depth maps of all images and then produce a single vertex painted point cloud that can be converted to a triangulated 3D mesh of dierent densities Software is able to create 3D digital elevation models from image collections captured by UAVs. The software is being oered as a standalone application or as a Web-service Automatic Reconstruction Conduit ARC 3D www.arc3d.be 123D Catch (Autodesk) www.123dapp.com/catch Hypr3D (Viztu Technologies) www.hypr3d.com PhotoModeler Scanner (Eos Systems) www.photomodeler.com Insight3D insight3d.sourceforge.net PhotoScan (Agisoft) www.agisoft.ru Pix4D pix4d.com 17 There are many instances of SFM and DMVR software which are summarised in the table above
  • 17. 44 Post Processing of 3D Capture MODELLING ONLINE DELIVERY METADATA LICENSING 3D post-processing is a complex procedure consisting of a sequence of processing steps that result in the direct improvement of acquired 3D data (by laser scanning, photogrammetry), and its transformation into visually enriched (and in some cases semantically structured) geometric representations. Post-processing also allows the creation of multiple 3D models starting from the same gathered data according to the desired application, level of detail and other additional criteria. The results of the post-processing phase are 3D geometric representations accompanied by complementary 2D media, which are the digital assets ready to be converted (or embedded) into the nal web publishing formats. Post-Process A - Geometric reconstruction Geometric reconstruction is the essential processing step for the elaboration of a 3D representation of an artefact or monument following the capture of 3D digitisation. This can be achieved using several relevant techniques which must be chosen based upon: •5IFNPSQIPMPHJDBMDPNQMFYJUZPGUIFPCKFDU GSPNHFPNFUSJDQSJNJUJWFTUPPSHBOJDTIBQFT •5IFTDBMFPGUIFPCKFDU •8IBUUIFöOBMNPEFMXJMMCFVTFEGPS SBOHJOHGSPNNFUSJDBOBMZTJTUPQVCMJDEJTTFNJOBUJPO 18
  • 18. 44 Automatic meshing from a dense 3D point cloud The simple criteria for choosing and evaluating a relevant 3D geometric reconstruction technique is the degree of consistency of the 3D model compared to the real object. These guidelines are primarily concerned with the creation of 3D models from digitised data therefore this processing method will focus upon the automated meshing of 3D data from point-cloud data. However, additional methods are available for the 3D reconstruction, including (in order of level of approximation to reality): •*OUFSBDUJWFPSTFNJBVUPNBUJDSFDPOTUSVDUJPOCBTFEPOSFMFWBOUQSPöMFT •*OUFSBDUJWFPSTFNJBVUPNBUJDSFDPOTUSVDUJPOCBTFEPOQSJNJUJWFTBEKVTUNFOU •*OUFSBDUJWFSFDPOTUSVDUJPOCBTFEPOUFDIOJDBMJDPOPHSBQIZ QMBOT DSPTTTFDUJPOTBOEFMFWBUJPOT •*OUFSBDUJWFSFDPOTUSVDUJPOCBTFEPOBSUJTUJDJDPOPHSBQIZ TLFUDIFT QBJOUJOHT FUD Point cloud data Once an artefact and monuments has been digitised the initial results (raw data) can be represented by a series of three dimen-sional data points in a coordinate system commonly called a point cloud. The processing of point clouds involves cleaning and the alignment phases. The cleaning phase involves the removal of all the non-desired data. Non-desired data would include the poorly captured surface areas (e.g. high deviation between laser beam and surface’s normal), the areas that belong to other objects (e.g. survey apparatus, people), the outlying points and any other badly captured areas. Another common characteristic of the raw data is noise. Noise can be described as the random spatial displacement of vertices around the actual surface that is being digitised. Compared to active scanning techniques such as laser scanning, image based techniques suer more from noise artefacts. Noise ltering is in an essential step that requires cautious application as it eects the ne morphological details been described by the data. Image of intenity shaded point cloud model of Cahergal stone fort (DiscoveRy Programme) 19
  • 19. 44 Processing mesh data The next stage in the processing pipeline is the production of a surfaced or “wrapped” 3D model. The transformation of point cloud data into a surface of triangular meshes is the procedure of grouping triplets of point cloud vertices to compose a triangle. The representation of a point cloud as a triangular mesh does not eliminate the noise being carried by the data. Nevertheless, the noise ltering of a triangular mesh is more ecient in terms of algorithm development due to the known surface topology and the surface normal vectors of the neighbouring triangles. Several processes must be completed to produce a topologically correct 3D mesh model. Image of point cloud data set and subsequent derived mesh model (DiscoveRy Programme) Mesh Cleaning Incomplete or problematic data from digitising an object in three dimensions is another common situation. Discontinuities (e.g. holes) in the data are introduced in each partial scan due to occlusions, accessibility limitation or even challenging surface properties. The procedure of lling holes is handled in two steps. The rst step is to identify the areas that contain missing data. For small regions, this can be achieved automatically using currently available 3D data processing software solutions. However, for larger areas signicant user interaction is necessary for their accurate identication. Once the identication is completed, the reconstruction of the missing data areas can be performed by using algorithms that take into consideration the curvature trends of the holes boundaries. Filling holes of complex surfaces in not a trivial task and can only be based on assumptions about the topology of the missing data. Additional problems identied in a mesh may include spikes, unreferenced vertices, and non-manifold edges, and these should also be removed during the cleaning stage. Meshing software (such as Meshlab or Geomagic Studio) has several routines to assist in the cleaning of problem areas of meshes. Illustration of the identification and closing of holes within the 3D mesh model (DiscoveRy Programme) 20
  • 20. 44 Mesh Simplification The mesh simplication, also known as decimation, is one of the most common approaches in reducing the amount of data needed to describe the complete surface of an object. In most cases the data produced by the 3D acquisition system includes vast amounts of superuous points. As a result, the size of the raw data is often prohibitive for interactive visualisation applications, and hardware requirements are beyond the standard computer system of the average user. Mesh simplication methods reduce the amount of data required to describe the surface of an object while retaining the geometrical quality of the 3D model within the specications of a given application. A popular method for signicantly reducing the number of vertices of a triangulated mesh, while maintaining the overall appearance of the object, is the quadric edge collapse decimation. This method merges the common vertices from adjacent triangles that lie on at surfaces, aiming to reduce the polygons number without sacricing signicant details from the object. Most simplication methods can signicantly improve the 3D mesh eciency in terms of data size. Illustration of high resolution polygon mesh model and simplified low polygon mesh model (DiscoveRy Programme) Mesh retopologisation Extreme simplication of complex meshes, such as for use in computer games and simulations, usually cannot be done automatically. Important features are dissolved and in extreme conditions even topology is compromised. Decimating a mesh at an extreme level can be achieved by an empirical technique called retopology. This is a 3D modelling technique, where special tools are used by the operator to generate a simpler version of the original dense model, by utilising the original topology as a supportive underlying layer. This technique keeps the number of polygons at an extreme minimum, while at the same time allow the user to select which topological features should be preserved from the original geometry. Retopology Image illustrating a low polygon mesh before (left ) and after retopologisation (left) modelling can also take advantage of parametric surfaces, like NURBS, in order to create models of innite delity while requiring minimum resources in terms of memory and processing power. Some of the commonly available software that can be used to perform the retopology technique include: 3D Coat, Mudbox, Blender, ZBrush, GSculpt, Meshlab Retopology Tool ver 1.2. Mesh retopologisation can be a time consuming process, however, it produces better quality light weight topology than automatic decimation. It also facilitates the creation of humanly recognizable texture maps. 21
  • 21. 44 TEXTURE MAPPING Modern rendering technologies, both interactive and non-interactive, allow the topological enhancement of low complexity geometry with special 2D relief maps, that can carry high frequency information about detailed topological features such as bumps, cracks and glyphs. Keeping this type of morphological features in the actual 3D mesh data requires a huge amount of additional polygons. However, expressing this kind of information as a 2D map and applying it while rendering the geometry can be by far more ecient. This can be achieved by taking advantage of modern graphics cards hardware and at the same time keeping resource requirements at a minimum. Displacement maps are generated using specialised 3D data processing software, e.g. the open source software xNormal. The software compares the distance from each texel on the surface of the simplied mesh against the surface of the original mesh and creates a 2D bitmap-based displacement map. Diagram illustrating the different texture maps which can be employed to enhance the display of a lightweight 3D model. From top: UV map, normal map, image map and ambient occlusion map (DISCOVERY PROGRAMME) 22
  • 22. 44 Post-Process B - Model structuring Depending on the scale and on the morphological complexity, a geometric 3D reconstruction of an artefact, architectural detail or an archaeological site generally leads to the representation of a single (and complex) geometric mesh or a collection of geometric entities organized according to several criteria. The model structuring strategy is usually carried out with the aim of harmonizing the hierarchical relations, which can express the architectural composition of a building (e.g. relations between entities and layouts) and can also be used as a guideline for structuring the related metadata. In some cases, it may be important to identify a domain expert to ensure the consistency of the chosen segmentation (e.g. temporal components) and nomenclature (e.g. specialized vocabulary) is coherent with archaeological and architectural theories. Examples of geometric reconstruction techniques (CNRS-MAP) According to the technique used and to the general purpose of the 3D representation, the results of a geometric reconstruction can be structured in four ways: 1. Single unstructured entity (e.g. dense point clouds, or detailed mesh) 2. Decomposed in elementary entities (e.g. 3D models composed by few parts) 3. Decomposed in elementary entities hierarchically organized (e.g. 3D models decomposed in several parts for expressing the architectural layouts) 4. Decomposed in entities organized in classes (e.g. 3D models decomposed in several parts for expressing the classication of materials, temporal states, etc.) According to the chosen model structuring strategy, the nal dataset structure (including geometry and visual enrichment) can be composed in several ways. Example of 3D model structuring (CNRS-MAP) : on the left, according to temporal states; on the right, according to a morphological segmentation (architectural units) (CNRS-MAP) 23
  • 23. 44 3D Geometry •4JOHMFTUSVDUVSFE%öMFXJUIPOFMFWFMof detail •.VMUJQMFJOEFQFOEFOU%öMFTXJUIPOFMFWFMPGEFUBJM •.VMUJQMFJOEFQFOEFOU%öMFTXJUINVMUJQMFMFWFMPGEFUBJM Textures •NCFEEFEJOUPUIF%HFPNFUSZöMFFHQFSWFSUFYDPMPVS •4UPSFEBTFYUFSOBM%öMFTFH67NBQT Post Process C - Visual enrichment of 3D models Several computer graphics techniques can be utilised for the visual enhancement of the 3D models produced from the geometric reconstruction processes. These guidelines focus on those techniques which provide a 3D simulation consistent with the visual and geometric characteristics of artefacts and monuments (reality capture) and other techniques, mainly used for the dissemination of 3D cultural data. The visual enrichment techniques described below are ordered from those that ensure a strong geometric consistency with the real object to techniques that introduce increasing approximations: •5FYUVSFFYUSBDUJPOBOEQSPKFDUJPOTUBSUJOHGSPNQIPUPHSBQITnely oriented on the 3D model (e.g. image-based modelling, photogrammetry) •5FYUVSJOHCZQIPUPHSBQIJDTBNQMFTPGUIFSFBMNBUFSJBMTPGthe artefact •5FYUVSJOHCZHFOFSJDTIBEFST •*OUFSBDUJWFPSTFNJBVUPNBUJDSFDPOTUSVDUJPOCBTFEPOrelevant proles •*OUFSBDUJWFPSTFNJBVUPNBUJDSFDPOTUSVDUJPOCBTFEPOprimitives adjustment •*OUFSBDUJWFSFDPOTUSVDUJPOCBTFEPOUFDIOJDBMJDPOPHSBQIZ(plans, cross-sections and elevations) •*OUFSBDUJWFSFDPOTUSVDUJPOCBTFEPOBSUJTUJDJDPOPHSBQIZ(sketches, paintings, etc.) Example of visual enrichment based on the projection of textures starting from photographs finely oriented on to a primitives 3D model (left) and the projection of panoramic imagery on organic 3D meshes (right) (CNRS-MAP / Discovery Programme) 24
  • 24. 44 Post Process D - Hypothetical reconstruction The hypothetical reconstruction of an architectural object or archaeological site to a previous state is a process primarily related to eld of historical studies. Nevertheless, some specic technical and methodological issues with 3D graphical representation of missing (or partially destroyed) heritage buildings are often integrated in 3D reconstruction approaches. While primarily related to the analysis of historical images and knowledge, the methodological approaches for the creation of hypothetical reconstruc-tions can be based on the integration of 3D metric data of existing parts of the object together with the reconstruction of the object’s shapes starting from graphical representations of the artefact/monument. Depending upon the source material available 3D may be created based upon a combination of the following methods: •UIF%BDRVJTJUJPOPGFYJTUJOH PSFYJTUFE QBSUT •QSFWJPVT%TVSWFZTPGFYJTUJOH PSFYJTUFE QBSUT •OPONFUSJDJDPOPHSBQIJDTPVSDFTPGUIFTUVEJFEBSUFGBDU •JDPOPHSBQIJDTPVSDFT NFUSJDBOEPSOPONFUSJD SFMBUFEUPTJNJMBSBSUFGBDUT In addition where reconstructions are created the following recommendations should be taken into account: •*EFOUJGZUIFTDJFOUJöDBEWJTPS T XIPDBOHVJEFBOEWBMJEBUFUIF%NPEFMEVSJOHJUTSFDPOTUSVDUJPO •%PDVNFOUJOGPSNBUJPOBCPVUBEEJUJPOBMTPVSDFT JNBHFSZBOECJCMJPHSBQIJDBMSFGFSFODFT VUJMJTFEJOthe elaboration of the 3D model •$MFBSMZJOEJDBUFBOETBWFJOGPSNBUJPOJOEJDBUJOHUIFdegree of uncertainty e.g. information gaps within the 3D model. Example of 3D hypothetical reconstruction of a past state (CNRS-MAP) 25
  • 25. 44 Creating complementary 2D media (derived from the 3D model) During the creation of 3D models of artefacts complementary 2D media can also be produced. This 2D media can be pro-duced in dierent ways, depending on the type of 3D source (point cloud, geometric model, visually enriched 3D model), as well as on the nal visualization type (static, dynamic, interactive). This additional content can be used to visualise content which cannot be successfully visualised through an interactive 3D web model, e.g. renderings of highly detailed 3D models or visualisation of full point cloud datasets. Static images •%SFOEFSJOHTPGWJTVBMMZFOSJDIFENPEFMTGSPNTFWFSBMQFSTQFDUJWFT •MFWBUJPOT QMBOTBOETFDUJPOTPGQPJOUcloud data •*NBHFTIJHIMJHIUJOHTQFDJöDGFBUVSFTPGthe cultural object Animation •5VSOUBCMFWJEFP •'MZUISPVHIBOJNBUJPOBOEWJEFPUPVST •4USVDUVSBMBOJNBUJPOIJHIMJHIUJOHdierent components of an artefact or monument and their interrelationship •5FNQPSBMBOJNBUJPOIJHIMJHIUJOHUIFchronological change of a structure, e.g. animation from present day ruin back to reconstruction model Interactive Images •1BOPSBNBT •730CKFDUT 26
  • 26. 44 POST PROCESSING 3D model CUrrent state IMAGeS - video video COMPLEMENTARY 2D MEDIA 3D model 12th CENTURY 3D model 12th CENTURY 3D model 10th CENTURY 3D model 11th CENTURY 3D model 11th CENTURY IMAGeS - video IMAGeS - video Complementary 2D media derived from the 3D model. Abbey of Saint-Michel de Cuxa (CNRS-MAP) images - detail images - 1 video 27
  • 27. 44 3D Publishing Methodology Capture MODELLING ONLINE DELIVERY METADATA LICENSING This section of the guidelines outlines the dierent methodologies and technical solutions for the optimal delivery and display of rich and complex 3D assets online. When evaluating publication formats the selection needs to consider the wide range in potential users from the general public to the researcher. Online publishing choice should be based upon the following criteria: •TFSWFBXJEFWBSJFUZPGOFFETBOEVTFST •NBYJNJTFUIF%VTFSFYQFSJFODF •GPDVTPOBDDFTTJCJMJUZQSPWJEJOHBOFBTZBOEJOUVJUJWFFYQFSJFODFGPSUIFVOFRVJQQFEVTFS •NBYJNJTFUIFBWBJMBCJMJUZPGUIF%DPOUFOUPOBTNBOZCSPXTFSQMBUGPSNTBTQPTTJCMF (desktop and mobile) •GPDVTPOTJOHMFSFMFBTFPGNPEFMTXIJDIDBOPQFSBUFPOBTNBOZQMBUGPSNTBOEPQFSBUJOHTZTUFNT to facilitate ecient sustainable production •BWPJEVTFSTIBWJOHUPJOTUBMMBEEJUJPOBMTPGUXBSFPSQMVHJOT •TVQQPSUUIFDPODFQUPGSFTPVSDFFYQMPSBUJPO FHJOUFHSBUFE63-T Creators of 3D content will also need to consider if the online 3D models require le format conversion and optimisation procedures to enable their use online, to ensure a responsive and pleasant user experience. It is important to evaluate which is the most optimal approach, taking into account the potential eort required for le format conversions and optimisation procedures. 28
  • 28. 44 Online publication technologies A range of suitable solutions exist for the creation and publication of online 3D content, each with their benets, limitations and applicability to cultural heritage. 3D model Objects Complex Buildings Sites Complexity Low High Low High Low High 3D PDF HTML5/WebGL X3D Unity3D/UnReal Pseudo-3D 3D PDF Yes Yes Optimised model Point cloud Optimised model Nexus/ point cloud Optimised model Optimised model Yes Yes Yes Yes Optimised model Optimised model Point cloud Optimised model Optimised model Yes Yes Yes Special cases (glass etc) The 3D PDF oers the ability to integrate 3D models and annotations within a PDF document. The 3D PDF format natively supports the Universal 3D (U3D) and Product Representation Compact (PRC) 3D le formats. The 3D PDF format was previously recommended within two EU projects: CARARE Linked Heritage Project. Advantages include: •1SFEFöOFEWJFXTDBOCFFNCFEEFEGPSUIFVTFS FHJOTJEFEJòFSFOUSPPNTPGBCVJMEJOH •.BKPSJUZPGVTFSTBMSFBEZIBWFB1%'WJFXFSTVDIBTDSPCBU3FBEFSJOTUBMMFEPOUIFJSDPNQVUFS •.PEFMTBSFSFMBUJWFMZFBTZUPVTF •.PEFMTBSFTFMGDPOUBJOFEBMMPXTUIFVTFPGBTJOHMF6OJGPSN3FTPVSDF*EFOUJöFS 63* JOPSEFSUPEFöOFB complete 3D model •5IFVTFSJTQSPWJEFEXJUIMJNJUFEUPPMTUPNFBTVSFBOETFDUJPO •4VJUBCMFGPSEFTLUPQCSPXTJOHEJSFDUWJTVBMJTBUJPOPG%1%'POEFTLUPQDPNQVUFST •5FYUVSFTDBOVTFUIFKQFHDPNQSFTTJPOGFBUVSFPG%1%'UPSFEVDFöMFTJ[FT •EEJUJPOBMNFEJBDPOUFOUTVDIBTUFYU WJEFP JNBHFTDBOCFFNCFEEFEXJUIJOUIF1%' Disadvantages include: • When opening a 3D PDF documents through a browser, which is often the case with hyperlinked documents, dierent display behaviours occur, depending on the browser as 3D PDF not supported in web browser itself due to security issue •1%'NVTUCFWJFXFEXJUIJODSPCBU3FBEFSOPUPCWJPVTGPSOPOUFDIOJDBMVTFS •.PEFMTBSFOPUOPSNBMMZIJHIMZPQUJNJ[FEGPSPOMJOFVTFSFTVMUJOHJOMPOHEPXOMPBEUJNFTBOEJOBCJMJUZUPXPSLPO slower machines •5IFVTFPG%1%'PONPCJMFEFWJDFTSFRVJSFTUIFVTFPGBOQQXJUIMJNJUFEGVODUJPOBMJUZ 29
The main authoring platform is Acrobat Pro, which, in combination with the 3D PDF Converter plug-in (Windows only) and additional software, allows 3D models in a large number of file formats, together with additional media, to be imported. 3D PDF files can also be created in Acrobat Pro without the Tetra4D Converter plug-in if one is able to translate the 3D models into the U3D file format (for example through MeshLab); this workflow is available on both Mac and Windows.

3D PDF model of a stone high-relief depicting a hunter with a hare, accompanied by a mastiff (Universidad de Jaén)

HTML5/WebGL Solutions

With the advent of HTML5 and its associated WebGL JavaScript API, interactive rendering of 3D visualisations can be achieved in a web browser, without installing additional software or plugins, by using the canvas element of HTML5. WebGL was utilised within the 3D-COFORM project as the method of choice for online 3D delivery. Most new HTML5/WebGL solutions use a cloud approach, in which the 3D models reside on the servers of the company providing the visualisation software, but the final model can be embedded in a normal HTML web page using canvas and iframes.

Advantages include:
• Supported automatically by many HTML5 desktop browsers (Chrome, Firefox, Opera, Internet Explorer); however, Safari requires users to enable it
• Increasing mobile support (Blackberry's mobile browser fully supports WebGL content, with partial support in the Android Chrome browser)
• Allows direct access to the graphics processing unit (GPU) on the hardware display card present in the computer
• As WebGL utilises HTML5, the minimum requirements for creating a WebGL application are a text editor and a web browser
• Can be used for the visualisation of point cloud data

Disadvantages include:
• iOS does not currently support WebGL, but from iOS 8 this will be implemented and is currently being beta tested
• Security concerns exist, as WebGL utilises the GPU and can give a malicious program the ability to force the host computer system to execute harmful code
• It is not supported by older graphics cards
• There is currently a lack of development environments
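As a concrete illustration of the point that a text editor and a web browser are enough, the sketch below loads a web-optimised model into an HTML5 canvas using the three.js library. Three.js is not prescribed by these guidelines; it is simply one widely used WebGL scene-graph wrapper, and the model path, canvas size and lighting values are placeholders.

```typescript
// Minimal sketch (not prescribed by these guidelines): loading an optimised OBJ model
// into an HTML5 canvas with the three.js WebGL library. The model path, canvas size and
// lighting values are placeholders; import paths may differ between three.js versions.
import * as THREE from "three";
import { OBJLoader } from "three/examples/jsm/loaders/OBJLoader.js";
import { OrbitControls } from "three/examples/jsm/controls/OrbitControls.js";

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(800, 600);
document.body.appendChild(renderer.domElement); // the HTML5 <canvas> element

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, 800 / 600, 0.1, 100);
camera.position.set(0, 1, 3);

scene.add(new THREE.AmbientLight(0xffffff, 0.6));
const keyLight = new THREE.DirectionalLight(0xffffff, 0.8);
keyLight.position.set(2, 4, 3);
scene.add(keyLight);

// Mouse orbit/zoom/pan, so the user can inspect the monument from all sides.
const controls = new OrbitControls(camera, renderer.domElement);

// Load a decimated (web-optimised) model produced by the post-processing stage.
new OBJLoader().load("models/monument_lowres.obj", (model) => scene.add(model));

function animate(): void {
  requestAnimationFrame(animate);
  controls.update();
  renderer.render(scene, camera);
}
animate();
```

A raw WebGL implementation without a wrapper library is equally possible but considerably more verbose, which is one reason most of the hosted viewers discussed below build on a scene-graph layer of this kind.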
A range of applications exists for WebGL-based 3D publishing, typically storing the 3D data on cloud-based servers and providing visualisation of the 3D content. The table on this page summarises the main options, the type of 3D model each is suited to (object/artefact, scene/building, point cloud) and comments on each:

P3D (object/artefact)
• Online storage: Mb on the free account, Gb on the paid account
• Visualisations can use alpha, bump, glossy, AO, glow, detail and spherical reflection maps in the paid version
• Only the obj format is supported
• Viewport shading option available

Big Object Base (BOB) Publish (object/artefact)
• Paid service only
• Support for large meshes which exceed the WebGL triangle count
• Flash alternative for non-WebGL browsers
• Option of a viewer app for mobile devices
• 3D streaming capability for multiresolution models

3DHOP (object/artefact)
• Nexus format: the ability to compress and stream 3D content that refines gradually depending upon the zoom level of the user
• Limited to colour-per-vertex data
• The user has the ability to dynamically adjust the lighting position
• Efficient presentation of 3D scan data, as the simplification/optimisation processing normally required to view detailed models is not required

SketchFab (object/artefact, scene/building)
• Offers automated pseudo-3D solutions for browsers where WebGL is not available
• Free, with the ability to import different file formats
• Visualisations can use standard diffuse, specular and shininess parameters, and light maps
• Annotation of objects available

CopperCube (scene/building)
• Suitable for complex scenes requiring walkthroughs and guided tours
• Comprehensive built-in features to interact with a building or site (collision detection, walking on surfaces)
• Light mapping and particle effects supported
• Content available through mobile apps
• Character animation available
• Supports import of multiple file formats
• Supports immersive environments with Oculus Rift; Windows and Mac availability

Potree (point cloud)
• Ability to incrementally load LOD point cloud data
• The user can adjust LOD and point size within the viewer; the point cloud dataset must be processed using a free conversion tool
• Open source solution
3D Viewer: Potree / TeraPoints. The viewer operates in a free-flight mode: to move the point of view, click and drag; to move the model, alt + click and drag; use the arrow keys to "fly"; move the mouse wheel up to move faster and down to slow down.

Tholos in Delphi, Greece: 3D point cloud model viewed online using the Potree WebGL viewer (CNRS-MAP)

Retopologised lightweight model of the Market Cross, Glendalough, viewed in the SketchFab online WebGL viewer (Discovery Programme)

Capital from the Cefalù cloister in Sicily, Italy, viewed in the Nexus online viewer format (ISTI-CNR)
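For point cloud delivery of the kind shown above, the Potree viewer is typically embedded with a few lines of script. The sketch below follows the pattern of Potree's published examples; exact function names and options vary between Potree versions, and the cloud.js path is a placeholder for output produced by the PotreeConverter tool.

```typescript
// Illustrative only: embedding a converted point cloud with the Potree viewer, following
// the pattern of Potree's published examples. Exact function names and options vary
// between Potree versions; the cloud.js path is a placeholder produced by PotreeConverter.
declare const Potree: any; // provided globally by the Potree scripts on the page

const viewer = new Potree.Viewer(document.getElementById("potree_render_area"));
viewer.setPointBudget(1_000_000); // cap the points drawn per frame for low-end hardware
viewer.setEDLEnabled(true);       // eye-dome lighting improves depth perception

Potree.loadPointCloud("pointclouds/tholos/cloud.js", "Tholos", (e: any) => {
  viewer.scene.addPointCloud(e.pointcloud); // the cloud streams in incrementally by LOD
  viewer.fitToScreen();
});
```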
X3D

X3D is the technological successor and extension to VRML and is recognised by the International Organization for Standardization (ISO). X3D currently provides native authoring and use of declarative XML-based X3D scenes which can be viewed within an HTML5 web browser, and provides Extensible Markup Language (XML) capabilities within 3D to integrate with other WWW technologies.

Advantages include:
• Provides enhanced 3D visualisation capabilities: multi-stage and multi-texture rendering, light maps, shaders, real-time reflection and non-uniform rational basis splines (NURBS)
• Utilising the JavaScript library X3DOM, X3D scenes become part of the HTML DOM
• X3DOM runs on top of WebGL, so it can be run directly in the browser without any plugin
• 3D models can utilise GNU gzip compression to reduce their file size

The 3D model of the metope from the Sele Heraion displayed within an X3D viewer (Fondazione Bruno Kessler)

Disadvantages include:
• The 3D model is constructed from multiple files, therefore the file structure is not self-contained and cannot be referenced via a single URI
• Compatibility issues exist between the 3D model and the specific viewer; multiple files, e.g. texture maps, are required to construct the scene

A wide range of authoring tools is available for the production of X3D models, or with X3D export functions, including open source (Blender and MeshLab) and paid solutions (AC3D).

Unity - Serious Games Solutions

Technology solutions developed for the provision of online gaming activities can be utilised for the visualisation and exploration of cultural heritage objects. Unity is one such game platform which can provide rich 3D environments for users.

Advantages include:
• Advanced visualisation features including real-time global illumination, reflection probes, physically based shading, the ability to embed audio, and complex animation
• Can be utilised on all major desktop platforms (Windows, Mac OS, Linux) and all major mobile platforms (Android, iOS, Windows Phone, Blackberry)
• Collision detection, the notion of ground surfaces and interactive objects, e.g. doors
• Provides users with an exploratory and non-linear 3D space
• The gaming environment is easy to use for non-technical users
• Very suitable for the provision of interactive models of heritage spaces, e.g. buildings and archaeological sites
• Web publishing is free
• A large community of users provides additional tools and plugins

Disadvantages include:
• The current version of Unity requires a plug-in to be installed on the user's machine; however, from the release of Unity v.5, online publishing with HTML5 capabilities will be available
• Cost of the software required to author 3D scenes if Pro functions are needed

Other game engine platforms adopted for serious gaming, such as the Unreal Development Kit (UDK), are available; however, most require the installation of an additional plug-in.

Unity3D test on the 3D virtual reconstruction of the Ename abbey in 1300 (VisDim)

Pseudo3D (ObjectVR) solutions

Pseudo3D provides the user with a near-to-3D experience by allowing the user to navigate interactively through a series of images taken at different orientations, which mimics real 3D visualisation. Pseudo3D can provide solutions to view 360° panoramas or to provide an orbital view of an object (ObjectVR); a minimal sketch of the orbital approach follows the list below. A Pseudo3D solution is a valuable tool for online display where:
• Complex 3D models cannot be rendered online in real time, e.g. large point cloud models
• 3D digitisation of an object is not possible, but photography is; the resulting images combined allow the user a pseudo3D experience
• It provides a solution for users whose hardware has very limited graphical capabilities
• It is particularly suitable for the display of complex 3D artefact models
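The sketch below illustrates the orbital ObjectVR idea referred to above, independently of the authoring tools discussed next: a fixed set of photographs or renders taken around the object is preloaded, and a horizontal drag swaps between them. The frame count, image paths and canvas element are placeholders.

```typescript
// Minimal sketch of the ObjectVR idea, independent of the authoring tools named below:
// a fixed set of photographs or renders taken around the object is preloaded, and a
// horizontal drag swaps between them. Frame count, paths and the canvas id are placeholders.
const frameCount = 36; // e.g. one image every 10 degrees around the object
const frames: HTMLImageElement[] = [];
for (let i = 0; i < frameCount; i++) {
  const img = new Image();
  img.src = `objectvr/frame_${String(i).padStart(3, "0")}.jpg`; // preload every view
  frames.push(img);
}

const canvas = document.querySelector<HTMLCanvasElement>("#objectvr")!;
const ctx = canvas.getContext("2d")!;
let current = 0;
let dragging = false;
let lastX = 0;

function draw(): void {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.drawImage(frames[current], 0, 0, canvas.width, canvas.height);
}

canvas.addEventListener("pointerdown", (e) => { dragging = true; lastX = e.clientX; });
window.addEventListener("pointerup", () => { dragging = false; });
window.addEventListener("pointermove", (e) => {
  if (!dragging) return;
  const step = Math.trunc((e.clientX - lastX) / 10); // roughly 10 px of drag per frame
  if (step !== 0) {
    current = ((current + step) % frameCount + frameCount) % frameCount;
    lastX = e.clientX;
    draw();
  }
});

frames[0].onload = draw; // show the first view once it has loaded
```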
Several software solutions are available to construct ObjectVR visualisations (Flashicator, BoxshotVR, Object2VR, Krpano), all of which can produce content via HTML5 (the use of QuicktimeVR requires a plugin and is therefore not suggested). Many of these tools also offer the user the ability to zoom into the object and inspect the model closely if high resolution images are used to create the ObjectVR. However, one limitation of this solution is that it confines the user to visualising the object through predefined paths.

Two images from an ObjectVR visualisation of the abbey of Ename in 1665 (by VisDim)

Remote Rendering

Interactive remote rendering combines an interactive low resolution 3D model (visualised through WebGL) with rendering of the corresponding high resolution 3D model on a remote server; only the rendered image is sent back, to replace the low resolution WebGL visualisation. An example of this approach is the Venus 3D model publishing system (CCRM Lab).

Advantages include:
• The 3D model does not need to be transferred over the internet or reside on the user's computer; only the JPEG image is transferred
• The user has the ability to dynamically alter the lighting position and inspect detailed 3D models

Disadvantages of this method include:
• Centralised hosting of the service would incur ongoing costs
• The lag experienced whilst waiting for the render to occur can be quite off-putting for the user and is dependent upon the user's internet speed

IPR Considerations for online publishing

An additional consideration for online publication is the IPR implications of the 3D models. Although the potential "theft" of 3D model visualisations should not be considered a major threat, several factors should be considered depending upon the publication method, including:
• 3D files which reside on a server and can be downloaded for visualisation, e.g. 3D PDF, could potentially be reused and altered by the user. Password protection is available to encrypt the data, although there is the potential to bypass this and extract the 3D model
• Where 3D models are offered to the user through an embedded HTML service and the 3D data is hosted by a third party (e.g. SketchFab), care must be taken to inspect that party's rights over the uploaded data
METADATA

Capture > Modelling > Online Delivery > METADATA > Licensing

Running in parallel to the 3D capture, modelling and publication activities, the creation of metadata is essential to the success of the processing pipeline. The metadata created within the pipeline provides key information and context data in five key areas:
1. It describes in detail the artefact or monument which is being modelled in 3D and its provenance
2. It describes in detail the digital representation of the artefact or monument and its online location
3. It provides technical information and quality assurance on the processes and methods utilised in the digitisation and modelling of heritage objects
4. It provides information on the access, licensing and reuse of the created 3D models and any associated digital content
5. It enables the search, discovery and reuse of content through the mapping of metadata to aggregators, e.g. via the Europeana Data Model (EDM)
CARARE 2.0 Metadata Schema

To construct a comprehensive metadata record for digital content created through the pipeline, one which covers the five key areas described above, the CARARE 2.0 metadata schema was selected. The CARARE metadata schema was developed during the CARARE project, an EU co-funded three-year project which addressed the challenge of making digital content, including information about archaeological monuments, artefacts, architecture and landscapes, available to Europeana's users. The CARARE schema acts as an intermediate schema between existing European standards and the EDM by:
• ensuring interoperability between the native metadata held by heritage organisations and Europeana
• creating a metadata schema able to map the existing original metadata into a common output schema
• supporting the full range of descriptive information about monuments, buildings, landscape areas and their representations

The CARARE schema is focussed on a heritage asset and its relations to digital resources, activities and collection information. The fundamental elements within its structure are:

CARARE Wrap - the CARARE start element. It wraps the Heritage Asset with the other information resources

Heritage Asset identification (HA) – the descriptive information and metadata about the monument, historic building or artefact. The ability to create relations between heritage asset records allows the relationships between individual monuments that form parts of a larger complex to be expressed

Digital Resource (DR) – digital objects (3D models, images, videos) which are representations of the heritage asset and are provided to services such as Europeana for reuse

Activity (A) – events that the heritage asset has taken part in; in this case this is used to record the data capture and 3D modelling activities (paradata) which are utilised to create the 3D content

Collection (C) – a collection level description of the data being provided to the service environment (Europeana)

Graphical example of the relations among the different themes (Heritage Asset, Digital Resources and Activities) of CARARE 2.0
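The sketch below gives a deliberately simplified, illustrative view of these four building blocks as a data structure. The field names are abbreviations chosen for readability here; they are not the schema's actual XML element names.

```typescript
// Simplified, illustrative view of the CARARE 2.0 building blocks described above.
// The field names are abbreviations chosen for readability, not the schema's XML elements.
interface HeritageAsset {
  id: string;
  name: string;
  description: string;
  spatial?: { lat: number; lon: number }; // location of the monument or artefact
  relatedAssets?: string[];               // e.g. monuments forming part of a larger complex
}

interface DigitalResource {
  id: string;
  format: "3D model" | "image" | "video";
  link: string;            // online location of the resource
  rights: string;          // licence, e.g. a Creative Commons statement
  isDerivativeOf?: string; // id of the higher-resolution resource it was derived from
}

interface Activity {
  id: string;
  technique: string; // capture or modelling technique (paradata)
  actors: string[];
  date: string;
}

interface CarareWrap {                       // the CARARE start element
  heritageAsset: HeritageAsset;
  digitalResources: DigitalResource[];       // representations supplied to Europeana
  activities: Activity[];                    // capture and modelling events
  collection: { id: string; title: string }; // collection-level description
}
```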
Object and digital asset relationships within CARARE

The creation of metadata for cultural heritage objects and their associated digital heritage assets (3D models, images and videos) should adhere to the following approach to capture the relationship between digital replicas and their original monuments or artefacts.

Diagram illustrating the approach to metadata creation for multiple derivatives from a single cultural heritage object: where a partner has one or more 3D digital models as replicas of one physical object, the Heritage Asset (HA) describes the physical object; Activities record events such as discovery, restoration or a change in ownership, as well as the capture and modelling work; the Digital Resources (DR) are the high resolution 3D model, the low resolution 3D model and any hypothetical virtual reconstruction, linked to the heritage asset by is_replica_of and to one another by is_derivative_of relations, with image_is_shown_at pointing to the landing page of the physical object.

Paradata

A specific form of metadata which is recommended within the 3D documentation process is paradata. Paradata is information and data which describes the process by which the 3D data was collected, processed and modelled, and it can act as a quality control audit for the data. Examples of paradata include:
• the type of 3D data collection technique (image based or range based)
• the type of equipment (model of the camera, lenses used, triangulation or TOF laser scanner, etc.)
• which software applications were used to process the data
The recording of paradata can be achieved both automatically and systematically during the survey process. Where possible, paradata created by capture devices, e.g. EXIF information from cameras, should be utilised. For all additional paradata, standardised paradata recording sheets should be used to ensure systematic recording of techniques, equipment and processes. An example paradata recording sheet created as part of the 3D-ICONS project is available for reuse on the project website. In terms of inclusion within the overall metadata schema, all paradata created can be mapped into the Activity component of the CARARE metadata schema.

Standardised vocabulary

Where possible, standardised vocabularies and their associated persistent uniform resource identifiers (URIs) should be utilised within the metadata, to develop and promote the use of semantic tools enabling interoperability, integration and the migration of the digital resources into the Linked Open Data format. The table below summarises recognised ontologies and thesauri which can be used in metadata creation for cultural heritage objects, by CARARE theme:

Actor
• FOAF (http://www.foaf-project.org/)
• DBpedia (http://dbpedia.org/About)

Concepts
• GEMET Thesaurus (http://www.eionet.europa.eu/gemet)
• Getty Art & Architecture Thesaurus (http://www.getty.edu/research/tools/vocabularies/aat/)
• HEREIN Thesaurus (http://thesaurus.european-heritage.net/herein/thesaurus/)
• ICCD/Cultura Italia Portal (http://www.culturaitalia.it/)
• Linked Data Vocabularies for CH (http://www.heritagedata.org/)

Spatial Data
• GeoNames (http://www.geonames.org/)
• Getty Thesaurus of Geographic Names (http://www.getty.edu/research/tools/vocabularies/tgn/)
• Ancient place names - Pleiades (http://pleiades.stoa.org/)
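As an illustration of how a recording-sheet entry, device metadata and a vocabulary concept come together, the sketch below shows one possible shape for a paradata record of the kind that maps into the CARARE Activity component. All values are placeholders invented for the example; in particular the Getty AAT URI is not a real concept identifier.

```typescript
// Illustrative paradata record for a single capture activity, of the kind that maps into
// the Activity component of the CARARE schema. All values are placeholders; the Getty AAT
// concept URI in particular is NOT a real identifier and must be looked up.
interface ParadataRecord {
  collectionTechnique: string;  // image based or range based
  techniqueConceptUri?: string; // persistent URI from a standardised vocabulary
  equipment: string[];          // camera model, lenses, scanner type, etc.
  software: string[];           // applications used to process the data
  recordedBy: string;
  date: string;
}

const exampleActivity: ParadataRecord = {
  collectionTechnique: "image based (photogrammetry)",
  techniqueConceptUri: "http://vocab.getty.edu/aat/<concept-id>", // placeholder only
  equipment: ["DSLR camera (model from EXIF)", "24 mm lens (from EXIF)"],
  software: ["photogrammetry package", "MeshLab"],
  recordedBy: "Imaging Partner surveyor",
  date: "2014-05-12",
};
```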
Resources for Metadata Creation

The actual process of metadata creation can be achieved via two different application paths, illustrated in the strategies diagram: metadata can be created from scratch with the online metadata editor, or legacy data held in existing databases can be mapped (and missing fields added) through MINT2, before ingestion and publication via MoRe2.

Strategy 1: Metadata Creation Tool

For those institutions and organisations which have no previous descriptive data relating to their collections to map, or have little experience in the production of XML metadata records, metadata can be created using the online 3D-ICONS metadata creation tool. Available online (http://orpheus.ceti.gr/3d_icons), the tool provides the user with the ability to define the separate building blocks of the CARARE metadata schema:
• Organization – the organisation(s) with responsibility for the 3D digital object assets
• Collection – a description of the overall 3D collection made available
• Actor – the person/people who have carried out the data collection and processing tasks, e.g. geo-surveyor
• Activity – descriptive detail of the digitisation and modelling activities utilised, e.g. terrestrial laser scanning
• Spatial data – the geographical location of the cultural heritage object
• Temporal data – the chronological period or date associated with the cultural heritage object
• Digital Resources – a description of the digital representation file, e.g. a jpeg image
Defining a Heritage Asset within the metadata creation tool

View of associated digital assets within the metadata creation tool
Strategy 2: MINT2 Mapping Tool

For those organisations which have their metadata already created and held within their own formalised cataloguing management software, e.g. museum collections databases, this can be reused to form the main component of the CARARE metadata record. To achieve this, the MINT 2 metadata services tool can be employed. MINT 2 comprises a web-based platform designed and developed to facilitate aggregation initiatives for cultural heritage content and metadata in Europe. The platform offers an organisation a management system that allows the operation of different aggregation schemes (thematic or cross-domain, international, national or regional) and corresponding access rights. Registered organisations can upload (http, ftp, OAI-PMH) their metadata records in XML or CSV format in order to manage, aggregate and publish their collections. The CARARE metadata model serves as the aggregation schema to which the ingested data is mapped. Users can define their metadata crosswalks from their own schema to CARARE with the help of a visual mappings editor, utilising a simple drag-and-drop function which creates the mappings. The MINT tool supports string manipulation functions for input elements in order to perform 1-n and m- mappings between the two models. Additionally, structural element mappings are allowed, as well as constant or controlled value (target schema enumerations) assignment, conditional mappings (with a complex condition editor) and value mappings between input and target value lists. Mappings can be applied to ingested records, edited, downloaded and shared as templates between users of the platform.

Screenshot of the mapping procedure within MINT 2

Once mapped, the MINT preview interface enables the user to visualise the steps of the aggregation, including the current input XML record, the XSLT of their mappings, the transformed record in the target schema, subsequent transformations from the target schema to other models of interest (e.g. Europeana's metadata schema), and available HTML renderings of each XML record.
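The idea of a crosswalk can be illustrated with the sketch below. This is not MINT 2 code: in MINT the mapping is defined in the visual editor and executed as XSLT over the ingested XML or CSV records, but the underlying operation is the same field-to-field transformation. The source field names are invented for the example.

```typescript
// Conceptual sketch of a metadata crosswalk; this is NOT MINT 2 code. In MINT the mapping
// is defined in the visual editor and executed as XSLT over the ingested XML/CSV records,
// but the underlying operation is the same field-to-field transformation shown here.
// The source field names are invented for the example.
interface NativeRecord {  // an organisation's own catalogue fields
  objectTitle: string;
  siteName: string;
  easting: number;
  northing: number;
  period: string;
}

function toCarareLike(src: NativeRecord) {
  return {
    heritageAsset: {
      name: src.objectTitle,                              // simple 1-1 mapping
      appellation: `${src.objectTitle}, ${src.siteName}`, // string manipulation across fields
      temporal: { periodName: src.period },
      spatial: { x: src.easting, y: src.northing },
    },
    // Constant values, controlled-vocabulary assignments and conditional mappings
    // are handled in the same way but omitted here.
  };
}
```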
Visualization of the mapped metadata record in MINT 2

Relating Metadata to Europeana – MoRe 2.0

Once the metadata record packages have been created, by either the online metadata tool or the MINT 2 service, they are transformed into the EDM and delivered to Europeana using the Monument Repository (MoRe2) services. The MoRe 2 repository aggregator tool also enables ingested metadata records to be validated against specific quality control criteria, e.g. that correct spatial coordinates are used for the spatial location. The MoRe 2 system also provides users with summary statistics of their metadata records, including the number of Heritage Assets ingested and the number and type of digital media objects referenced, e.g. images and 3D models. Once validated and ingested, metadata records can then be published to Europeana with the click of a button.

Screen capture of the MoRe 2.0 tool displaying ingested metadata packages
LICENSING & IPR Considerations

Capture > Modelling > Online Delivery > Metadata > LICENSING

For the effective sharing and reuse of 3D content of heritage objects, a common framework is required to establish best practice in the management and licensing of 3D models and any associated digital objects (video, metadata, images). Understandably, many institutions are concerned that providing access to 3D content could potentially erode their commercial rights to the data. The standardised IPR scheme presented here:
• Identifies the key data and relationships which require management
• Provides robust licences to retain commercial rights to the data whilst enabling reuse for educational and research activities
• Collates suitable metadata with an appropriate Creative Commons (CC) licensing structure for submission to Europeana
• Examines the key copyright challenges faced by all parties involved in the process of capturing, processing, developing and presenting digital content
IPR & the 3D pipeline

The creative processes and activities involved in the 3D pipeline result in the generation of Intellectual Property Rights (IPR) at many junctures. The development of a suitable IPR model is relevant at all stages of the pipeline, from the earlier phases, which are dominated by controlled access rights, to the later phases, where substantial effort is invested in the modelling of captured 3D data to produce rich and effective 3D heritage content. This is important in recognising that while content providers may control access, it is the later processes that carry the highest costs and generate the greatest IPR.

Illustration of the object activity chain identifying the range of people and organisations involved in creating 3D content for cultural heritage

The IPR scheme proposed here is integrated into all the activities of the 3D modelling pipeline, from initial data capture to the delivery of 3D heritage content online. Within the pipeline several key actors and organisations are defined:
• Monument/artefact Manager – the organisation which is the custodian or owner of the heritage object, e.g. a museum
• Imaging Partner – the company or institution which carries out the primary 3D data capture of the heritage object
• 3D Development Partner – the company or institution which executes the 3D data modelling of the heritage object for delivery online
• Distribution Partner – the organisation which hosts 3D content for public use
• Commercialisation Partner – the company which wishes to establish a potential revenue path for 3D data

Within the processing pipeline there are several milestones where IPR agreements need to be applied.
Access Agreement

At the start of the pipeline, where Imaging Partners capture 3D data of a monument or artefact in the ownership or management of a third party (e.g. a national heritage organisation), it is good practice to establish an Access Agreement. This agreement outlines both the arrangements in place to physically access the site or museum to capture the data, and what level of control each party has over the initial survey data captured. Depending upon who is funding the work, two standard agreements are possible:

Full or co-funding for capture provided by the Imaging Partner - non-exclusive licences for both parties to make use of the primary data, with the IPR resting with the Imaging Partner

Full funding provided by the Heritage Organisation - assignation of the IPR by the Imaging Partner to the heritage body

It is also important to clearly state the IPR on any subsequent 3D content derived from the original captured data, as these are new and distinct data sets and often require significant amounts of effort to produce the final deliverable 3D model.

Derivatives Agreements

Depending upon the attitude of the Imaging Partner to data sharing, the original 3D capture data (e.g. high quality point cloud data) will not normally be publicly accessible. However, when new products are derived by a third party, a Business-to-Business (B2B) derivative agreement will be required. For organisations where the data capture and 3D modelling are carried out within the same institution, no additional derivative agreement is required.

Metadata Agreements

Where metadata and paradata are provided by 3D content creators to third parties such as Europeana, for the purpose of increasing the visibility and reuse of the 3D models, a Creative Commons (CC0) licence is usually adopted. The metadata agreement will not interfere with any subsequent commercialisation of content by the rights holder.

Public Use Agreements

The 3D models and other associated derived products, such as videos and images, will normally be made widely available to the public using a more restrictive arrangement than the metadata agreement, to retain control over potential commercial and inappropriate future reuse. This will depend upon the policy of the 3D content creator organisation and can range from the restrictive (paid access, no reuse) to the liberal (CC0), but it is likely that organisations will want to retain some potential commercial value in their models. It is recommended that organisations at least apply the Creative Commons Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND) licence to their models, which allows redistribution and non-commercial reuse as long as the 3D content is unchanged and the creator organisation is credited. The full range of potential rights statements available through Europeana can be found at http://pro.europeana.eu/web/guest/available-rights-statements.

Commercial Agreements

Final 3D models, additional content (videos and rendered images) and supplementary data created within the 3D pipeline process have the potential to be commercialised. Licensing models to commercial image libraries or directly to end users can help fund the creation of higher quality models and may well be in the interest of all parties, as once created, resources may be used commercially and non-commercially. These agreements are a critical part of stimulating an added value chain so that original survey work can reach its full potential.
Visualisation of the different agreements and licence structures which can be utilised during the capture, modelling and reuse of 3D cultural heritage models: the Content Partner (objects and sites, provenance, archives, accreditation) grants an Access Agreement (1) to the Imaging Partner, which creates first generation content and IPR; a Metadata Agreement (2) covers the who, what, when and where information supplied to Europeana (portal and search engine); a Derivative Agreement (3) passes 3D data, photography and supporting materials to the Development Partner, which generates additional IPR and creates second generation content; a Public Use Agreement (4) covers the visualisations made available online by the Distribution Partner, which holds the distributable visuals; and a Commercial Agreement (5) gives the Sales Partner access to assets and original IPR for the fulfilment and distribution of 3D data, photography, texture maps and digital or physical merchandise, establishing revenue paths for these materials.
Creative Commons Key Facts

Founded in 2001, and thanks to the proliferation of the internet and websites like Wikipedia, Creative Commons has become one of the most recognised licensing structures available; it also forms the IP structure for Europeana.

Creative Commons enables the sharing and use of creativity and knowledge through free, public and standardised infrastructures and tools that create a balance between the reality of the Internet and the reality of copyright laws.

Creative Commons licences require licensees to get permission to do any of the things with a work that the law reserves exclusively to a licensor and that the licence does not expressly allow.

Creative Commons licensees must credit the licensor, keep copyright notices intact on all copies of the work, and link to the licence from copies of the work.

CC licences range from a fully open licence, where users can copy, modify, distribute and perform the work, even for commercial purposes, all without asking permission (CC0), to the restrictive CC BY-NC-ND, where others can download your works and share them with others as long as they credit you, but cannot change them in any way or use them commercially.
Creative Commons licences, in order of increasing reuse restriction:
• Public Domain - CC0
• Attribution - CC BY 3.0
• Attribution-ShareAlike - CC BY-SA 3.0
• Attribution-NoDerivs - CC BY-ND
• Attribution-NonCommercial - CC BY-NC
• Attribution-NonCommercial-ShareAlike - CC BY-NC-SA
• Attribution-NonCommercial-NoDerivs - CC BY-NC-ND
Appendix 1: Additional 3D-ICONS Resources

Project Reports
D2.1 Digitisation Planning Report, Paolo Cignoni (CNR) and Andrea d'Andrea (CISA)
D2.3 Case Studies for Testing the Digitisation Process, Anestis Koutsoudis, Blaz Vidmar and Fotis Arnaoutoglou (CETI) and Fabio Remondino (FBK)
D3.1 Interim Report on Data Acquisition, Gabriele Guidi (POLIMI)
D3.2 Final Report on Data Acquisition, Gabriele Guidi (POLIMI)
D4.1 Interim Report on Post-processing, Livio de Luca (CNRS-MAP)
D4.2 Interim Report on Metadata Creation, A. D'Andrea (CISA) with the collaboration of R. Fattovich and F. Pesando (CISA), A. Tsaouselis and A. Koutsoudis (CETI)
D4.3 Final Report on Post-processing, Livio de Luca (CNRS-MAP)
D5.1 3D Publication Formats Suitable for Europeana, Daniel Pletinckx and Dries Nollet (VisDim)
D5.2 Report on Publication, Daniel Pletinckx and Dries Nollet (VisDim)
D6.1 Report on Metadata and Thesauri, Andrea d'Andrea (CISA) and Kate Fernie (MDR)
D6.2 Report on Harvesting and Supply, Andrea d'Andrea (CISA) and Kate Fernie (B2C)
D7.1 Preliminary Report on IPR Scheme, Mike Spearman, Sharyn Emslie (CMC)
D7.2 IPR Scheme, Mike Spearman, Sharyn Emslie and Paul O'Sullivan (CMC)
D7.4 Report on Business Models, Mike Spearman, James Hemsley, Emma Inglis, Sharyn Emslie and Paul O'Sullivan (CMC)

All project reports are available from the 3D-ICONS website at the following URL: http://3dicons-project.eu/index.php/eng/Resources

Publications
D'Andrea, A., Niccolucci, F. and Fernie, K., 2012. 3D-ICONS: European project providing 3D models and related digital content to Europeana, EVA Florence 2012.
D'Andrea, A., Niccolucci, F., Bassett, and Fernie, K., 2012. 3D-ICONS: World Heritage Sites for Europeana: Making Complex 3D Models Available to Everyone, VSMM 2012.
D'Andrea, A., Niccolucci, F. and Fernie, K., 2013. CARARE 2.0: a metadata schema for 3D Cultural Objects, Digital Heritage 2013, International Congress, IEEE Proceedings.
D'Andrea, A., Niccolucci, F. and Fernie, K., 2013. 3D ICONS metadata schema for 3D objects, Newsletter di Archeologia CISA, Volume 4, pp. 159-181.
Callieri, M., Leoni, C., Dellepiane, M. and Scopigno, R., 2013. Artworks narrating a story: a modular framework for the integrated presentation of three-dimensional and textual contents, ACM WEB3D - 18th International Conference on 3D Web Technology, pp. 167-175. PDF: http://vcg.isti.cnr.it/Publications/2013/CLDS13/web3D_cross.pdf
Dell'Unto, N., Ferdani, D., Leander, A., Dellepiane, M. and Lindgren, S., 2013. Digital reconstruction and visualization in archaeology: case-study drawn from the work of the Swedish Pompeii Project, Digital Heritage 2013 International Conference, pp. 621-628. PDF: http://vcg.isti.cnr.it/Publications/2013/DFLDCL13/digitalheritage2013_Pompeii.pdf
Gonizzi Barsanti, S. and Guidi, G., 2013. 3D digitization of museum content within the 3D-ICONS project, ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., II-5/W1, pp. 151-156. Online: www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/II-5-W1/151/2013/
Gonizzi Barsanti, S., Micoli, L.L. and Guidi, G., 2013. 3D Digitizing a whole museum: a metadata centered workflow, 2013 Digital Heritage International Congress (DigitalHeritage), Vol. 2, pp. 307-310, IEEE, ISBN 978-1-4799-3169-9.
Guidi, G., Rodríguez Navarro, P., Gonizzi Barsanti, S., Loredana Micoli, L. and Russo, M., 2013. Quick textured mesh generation in Cultural Heritage digitization, Built Heritage 2013, Milan, Italy, pp. 877-882, [Selected for printed publication]. Online: http://www.bh2013.polimi.it/papers/bh2013_paper_324.pdf
Hermon, S., Bakirtzis, N. and Kyriacou, P., 2013. 3D Documentation – Analysis - Interpretation, Digital libraries of 3D data – access and inter-operability, and The cycle of use and re-use of digital heritage assets, Scientific Support for Growth and Jobs (2013): Cultural and Creative Industries, Brussels, Belgium, Session: posters and presentation.
Hermon, S., Ben-Ami, D., Khalaily, H., Avni, G., Iannone, G. and Faka, M., 2013. 3D documentation of large-scale, complex archaeological sites: The Givati Parking excavation in Jerusalem, Conference Proceedings, Digital Heritage 2013, Marseilles, France, vol 2, Session: Digital Documentation of Archaeological Heritage, pp. 581.
Hermon, S., Niccolucci, F., Yiakoupi, K., Kolosova, A., Iannone, G., Faka, M., Kyriacou, P. and Niccolucci, V., 2013. Documenting Architectonic Heritage in Conflict Areas. The case of Agia Marina Church, Derynia, Cyprus, Conference Proceedings, Built Heritage 2013, Monitoring Conservation Management, Milan, Italy, 20 November, pp. 800-808. Available: http://www.bh2013.polimi.it/papers/bh2013_paper_216.pdf [20 Dec 2013].
Hermon, S., Khalaily, H., Avni, G., Reem, A., Iannone, G. and Fakka, M., 2013. Digitizing the Holy – 3D Documentation and analysis of the architectural history of the "Room of the Last Supper" – the Cenacle in Jerusalem, Conference Proceedings, Digital Heritage 2013, Marseilles, France, vol 2, Session 3 - Architecture, Landscape: Documentation Visualization, pp. 359-362.
Jiménez Fernández-Palacios, B., Remondino, F., Lombardo, J., Stefani, C. and De Luca, L., 2013. Web visualization of complex reality-based 3D models with Nubes, Digital Heritage 2013 Int. Congress, IEEE Proceedings.
Leoni, C., Callieri, M., Dellepiane, M., Rosselli Del Turco, R. and O'Donnell, D., 2013. The Dream and the Cross: bringing 3D content in a digital edition, Digital Heritage 2013 International Conference, pp. 281-288, October 2013. PDF: http://vcg.isti.cnr.it/Publications/2013/LCDRO13/DreamAndTheCross.pdf
Niccolucci, F., Felicetti, A., Amico, N. and D'Andrea, A., 2013. Quality control in the production of 3D documentation of monuments, Built Heritage 2013, proceedings: http://www.bh2013.polimi.it/papers/bh2013_paper_314.pdf
Remondino, F., Menna, F., Koutsoudis, A., Chamzas, C. and El-Hakim, S., 2013. Design and implement a reality-based 3D digitisation and modelling project, Digital Heritage 2013 Int. Congress, IEEE Proceedings.
Ronzino, P., Niccolucci, F. and D'Andrea, A., 2013. Built Heritage metadata schemas and the integration of architectural datasets using CIDOC-CRM, Built Heritage 2013, proceedings: http://www.bh2013.polimi.it/papers/bh2013_paper_318.pdf
Yiakoupi, K. and Hermon, S., 2013. Israel Case Studies: The Room of the Last Supper and the Tomb of King David Hall, Presentation, Digital Heritage 2013, Marseilles, France, Session: "Exploring the 3D ICONS project: from capture to delivery".
Appendix 2: Project Partners

Archeotransfert, France
Athena Research and Innovation Centre in Information, Communication and Knowledge Technologies (CETI), Greece
Centre National de la Recherche Scientifique (CNRS-MAP), France
CMC Associates Ltd., UK
Consiglio Nazionale delle Ricerche (CNR-ISTI), Italy
Consorzio Interdipartimentale Servizi Archeologici (CISA), Italy
The Cyprus Research and Educational Foundation (CYI-STARC), Cyprus
Visual Dimension bvba (VisDim), Belgium
The Discovery Programme Ltd., Ireland
Koninklijke Musea voor Kunst en Geschiedenis (KMKG), Belgium
Muzeul Național de Istorie a României (MNIR), Romania
National Technical University of Athens (NTUA), Greece
Politecnico di Milano (POLIMI), Italy
Universidad de Jaén, Andalusian Centre for Iberian Archaeology (UJA-CAAI), Spain
Fondazione Bruno Kessler (FBK), Italy