Visual Information Processing and Protection Group
Overhead images are characterized by repetitive patterns (e.g., similar ground textures, house roofs, cars). These may introduce false positives when searching for copy-move forgeries (i.e., duplicated regions). On the other hand, a detector may be able to exploit these repetitive scenarios to derive a series of constraints to distinguish which region is the original and which is the copy in case of duplication. Another approach that can be used to disambiguate the source and target regions of a copy-move forgery relies on the interpolation artifacts associated with the geometric transformations that often accompany such forgeries. The non-perfect invertibility of the geometric transformation, in fact, makes it possible to predict the target region from the source one, but not the other way round.
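The asymmetry induced by interpolation can be illustrated with a toy experiment (a minimal sketch, assuming a known rotation angle and a synthetic patch; a real detector would have to estimate the transformation): re-applying the forward transformation to the source reproduces the target exactly, whereas inverting it on the target cannot reproduce the source.

```python
# Toy illustration (not the project's method): asymmetry of interpolated
# copy-moves.  The patch content and the rotation angle are made-up values.
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(0)
source = rng.random((64, 64))          # stand-in for the source region
angle = 7.3                            # rotation applied by the (hypothetical) forger

# Target region as it appears in the tampered image.
target = rotate(source, angle, reshape=False, order=3, mode='reflect')

# Forward prediction (source -> target): same deterministic pipeline, exact match.
pred_target = rotate(source, angle, reshape=False, order=3, mode='reflect')
# Backward prediction (target -> source): interpolation loss cannot be undone.
pred_source = rotate(target, -angle, reshape=False, order=3, mode='reflect')

crop = slice(8, -8)                    # ignore border effects
err_fwd = np.mean((pred_target - target)[crop, crop] ** 2)
err_bwd = np.mean((pred_source - source)[crop, crop] ** 2)
print(f"source->target MSE: {err_fwd:.2e}   target->source MSE: {err_bwd:.2e}")
```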
A distinguishing feature of many satellite images is their spectral resolution. In fact, the number of spectral bands may range from 5-7 in multispectral images to hundreds or even thousands in hyperspectral images. The correlation between spectral bands represents a unique, distinguishing signature of the sensor that acquired the image and of the subsequent processing pipeline, and hence can be used to understand whether a portion of an image has been spliced from a donor image with a different spectral characterization. The goal of this part of the project is to design an architecture explicitly conceived to extract inter-spectral features and to use it to detect spliced regions in multi- and hyperspectral images.
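As a rough illustration of what such an architecture might look like (a minimal sketch under assumed band counts and layer widths, not the project's design), the first layers can operate along the spectral axis only, so that inter-band correlation is captured before any spatial context is aggregated:

```python
# Minimal sketch: 1x1 spatial kernels mix information across spectral bands
# only, then shallow spatial convolutions produce a per-pixel splicing map.
# Band count, channel widths and depths are illustrative assumptions.
import torch
import torch.nn as nn

class SpectralSplicingNet(nn.Module):
    def __init__(self, num_bands: int = 64):
        super().__init__()
        # Per-pixel mixing across spectral bands (no spatial context yet).
        self.spectral = nn.Sequential(
            nn.Conv2d(num_bands, 128, kernel_size=1), nn.ReLU(),
            nn.Conv2d(128, 64, kernel_size=1), nn.ReLU(),
        )
        # Spatial aggregation on top of the inter-spectral features.
        self.spatial = nn.Sequential(
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):                 # x: (batch, bands, H, W)
        return torch.sigmoid(self.spatial(self.spectral(x)))  # splicing probability map

model = SpectralSplicingNet(num_bands=64)
dummy = torch.rand(1, 64, 128, 128)       # synthetic hyperspectral cube
print(model(dummy).shape)                 # torch.Size([1, 1, 128, 128])
```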
The PrintOut project addresses the development of anti-counterfeiting technologies from the perspective of an expert attacker, building on concepts from:
Computer Vision: to develop software that uses a smartphone camera to analyze artifacts in printed material belonging to counterfeit or genuine products.
Game Theory: to model the strategies of attacker and defender, making product authentication better suited to real-world situations.
Information Theory: used to formulate metrics and analyze data from counterfeit and genuine 2D barcodes.
Machine Learning: used to study descriptive data and discriminate counterfeit from genuine printed material (see the sketch after this list).
Adversarial Machine Learning: used to model attacks against authentication techniques.
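The sketch below illustrates only the machine-learning ingredient listed above, under the assumption that counterfeits behave like re-printed (blurred, noisier) codes; the features, the classifier and the synthetic data are all illustrative choices, not the project's actual pipeline.

```python
# Minimal sketch: hand-crafted print-quality statistics fed to an SVM that
# separates genuine from counterfeit prints.  The data is synthetic; real
# features would be computed on smartphone photos of printed codes.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def print_features(patch):
    """Toy descriptors: overall contrast and mean gradient magnitude."""
    gy, gx = np.gradient(patch.astype(float))
    return np.array([patch.std(), np.hypot(gx, gy).mean()])

# Synthetic stand-ins: counterfeits modelled as re-printed (attenuated, noisier) codes.
genuine = [rng.integers(0, 2, (32, 32)) * 255.0 for _ in range(200)]
counterfeit = [g * 0.8 + rng.normal(0, 25, g.shape) for g in genuine]

X = np.array([print_features(p) for p in genuine + counterfeit])
y = np.array([0] * len(genuine) + [1] * len(counterfeit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel='rbf')).fit(X_tr, y_tr)
print(f"toy accuracy: {clf.score(X_te, y_te):.2f}")
```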
While the appearance of AI-based editing tools is only the latest, most dramatic step towards the final delegitimization of digital media as a trustworthy representation of reality, MultiMedia Forensics (MMF) researchers have started looking at AI as a way to preserve the dependability of digital media. In recent years, several detectors based on Convolutional Neural Networks (CNNs) have been developed to detect whether an image or a video has been manipulated, or to gather information about its history.
Despite the promising results achieved so far, the application of AI-based methods to MMF is seriously hindered by a number of shortcomings, including: the need for huge amounts of training data; the poor interpretability of the analysis; and the vulnerability of the detectors to deliberate attacks.
The goal of PREMIER is to overcome the above shortcomings and develop a new class of AI-based MMF tools that can be successfully used to preserve the dependability of digital media. To do so, PREMIER will pursue a novel approach whereby AI techniques are enriched with a model-based, signal-processing viewpoint. In this way, the strengths of data-driven techniques will be maintained, while any available information stemming from the scenario at hand is exploited. In this vein, the need for huge amounts of training data will be relaxed by constraining the network structure and the training process so as to orient the analysis towards task-relevant features.
The strong connection between certain classes of constrained CNNs and signal-processing methodologies will be exploited to ease the interpretability of the forensic analysis. The use of signal-processing-oriented structures, in fact, eases the construction of temporal and spatial heat-maps showing which parts of the analyzed media contributed most to the detector's outcome. A better interpretation of the analysis will also ensure that MMF detectors base their decisions on task-relevant features, avoiding so-called confounding factors, which can easily be attacked by a forger. Still with regard to security, the joint use of self-learned and handcrafted features will be exploited to improve the resilience of the detectors against deliberate attacks.
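One concrete example of a signal-processing-oriented constraint of the kind alluded to above is a first convolutional layer forced to act as a prediction-error (residual) filter, in the spirit of the constrained convolutions proposed in the MMF literature. The sketch below only illustrates the mechanism, not necessarily the constraint PREMIER will adopt; kernel size and channel counts are arbitrary.

```python
# Minimal sketch of a constrained first layer: after every optimisation step
# the kernel is re-projected so that the centre tap is -1 and the remaining
# taps sum to 1, i.e. the layer outputs prediction residuals ("noise" maps).
import torch
import torch.nn as nn

class ConstrainedConv2d(nn.Conv2d):
    """5x5 convolution constrained to behave as a prediction-error filter."""
    def __init__(self, in_ch, out_ch):
        super().__init__(in_ch, out_ch, kernel_size=5, padding=2, bias=False)

    @torch.no_grad()
    def project(self):
        w = self.weight                       # (out_ch, in_ch, 5, 5)
        w[:, :, 2, 2] = 0.0                   # temporarily zero the centre tap
        w /= w.sum(dim=(2, 3), keepdim=True)  # remaining taps sum to 1
        w[:, :, 2, 2] = -1.0                  # centre tap subtracts the prediction

layer = ConstrainedConv2d(1, 3)
layer.project()                               # call after each optimiser step
x = torch.rand(1, 1, 64, 64)
print(layer(x).shape)                         # residual maps: (1, 3, 64, 64)
```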
The project will mainly focus on video forensics, due to the importance that video has in the formation of opinions and the diffusion of information, and because research in video forensics is much less advanced than for still images, hence representing a more challenging area wherein the soundness of the PREMIER viewpoint can be verified.
Consumer-grade imaging sensors have become ubiquitous in the past decade. Images and videos collected from such sensors are used by many entities for public and private communications, including publicity, advocacy, disinformation, and deception. The US Department of Defense (DoD) would like to be able to extract knowledge from this imagery and understand its provenance. Many images and videos are modified and/or manipulated prior to publication or dissemination. The goal of this research is to develop a set of forensic tools to determine the integrity, semantic consistency and evolutionary history of images and videos. We have assembled a team of outstanding technical experts from seven universities, with complementary skills and backgrounds in computer vision and biometrics, machine learning, digital forensics, as well as signal processing and information theory. We are investigating all areas of media integrity.
This material is based on research sponsored by DARPA and the Air Force Research Laboratory (AFRL) under agreement number FA8750-16-2-0173. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA and the Air Force Research Laboratory (AFRL) or the U.S. Government.
CS addresses the problem of collecting a number of measurements smaller than that required by Shannon's sampling theorem, yet sufficient to allow the reconstruction of sparse signals with an arbitrarily low error. A promising application of CS theory regards the remote acquisition of hyperspectral imagery for satellite earth observation. This project aims at reducing the computational complexity of the reconstruction algorithm, allowing the application of CS to larger spatial/spectral blocks, and improving its performance. This will result in a better trade-off between the number of detectors and the speed of the light modulator/detector, without sacrificing reconstruction quality. The problem of sensor calibration will also be investigated in the context of CS.
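The reconstruction side of CS can be sketched with a basic iterative soft-thresholding (ISTA) solver on a synthetic sparse signal; sizes, sparsity level and regularization weight are illustrative assumptions, and the project's actual operating point (block sizes, sparsifying transform) is not represented here.

```python
# Minimal sketch: m random measurements of an n-dimensional signal that is
# k-sparse (sparse directly in the sample domain for simplicity), recovered
# with plain ISTA for min 0.5*||y - Ax||^2 + lam*||x||_1.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 512, 128, 10                      # ambient dim, measurements, sparsity

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)

A = rng.normal(0, 1 / np.sqrt(m), (m, n))   # random sensing matrix
y = A @ x_true                              # compressive measurements

lam = 0.01
L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)
    z = x - grad / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-thresholding

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.3f}")
```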
The project focuses on the development of new techniques for multi-clue forensic analysis that, starting from the indications provided by a pool of forensic tools designed to detect the presence of specific artifacts, reach a global conclusion about the authenticity of a given image. The to-be-developed techniques will have to operate in highly non-structured scenarios (e.g., the analysis of images downloaded from the Internet) characterized by imprecise and incomplete information.
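The project description does not prescribe a specific fusion framework; as one possible illustration of how imprecise and incomplete tool outputs could be merged, the sketch below uses Dempster-Shafer combination, letting each (hypothetical) tool leave part of its belief mass on a "don't know" state. All numbers are made up.

```python
# Toy multi-clue fusion with Dempster's rule on the frame {tampered, authentic}.
from functools import reduce

# Each tool reports mass on {tampered} ("T"), {authentic} ("A") and on the
# whole frame ("TA"), the latter expressing its own uncertainty.
tools = [
    {"T": 0.6, "A": 0.1, "TA": 0.3},    # e.g. a JPEG-artifact detector
    {"T": 0.2, "A": 0.2, "TA": 0.6},    # e.g. a noise-inconsistency detector (unsure)
    {"T": 0.5, "A": 0.1, "TA": 0.4},    # e.g. a copy-move detector
]

def dempster(m1, m2):
    """Dempster's rule of combination for the two-hypothesis frame."""
    conflict = m1["T"] * m2["A"] + m1["A"] * m2["T"]
    norm = 1.0 - conflict
    return {
        "T": (m1["T"] * m2["T"] + m1["T"] * m2["TA"] + m1["TA"] * m2["T"]) / norm,
        "A": (m1["A"] * m2["A"] + m1["A"] * m2["TA"] + m1["TA"] * m2["A"]) / norm,
        "TA": (m1["TA"] * m2["TA"]) / norm,
    }

fused = reduce(dempster, tools)
print({k: round(v, 3) for k, v in fused.items()})   # global belief after fusion
```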
The MAVEN project will focus on the development of a set of tools for multimedia data management and security. More specifically, MAVEN's objectives will be centered on two key concepts, "search" and "verify", integrated in a coherent manner: MAVEN will first search for digital contents containing "objects" of interest (e.g., a face appearing on CCTV, or a specific logo appearing in the news). Once those objects are retrieved, MAVEN will apply advanced forensic analysis tools to verify their integrity and authenticity.
With the rapid proliferation of inexpensive acquisition and storage devices, multimedia objects can be easily created, stored, transmitted, modified and tampered with by anyone. During its lifetime, a digital object might go through several processing stages, including multiple analog-to-digital (A/D) and digital-to-analog (D/A) conversions, coding and decoding, transmission, and editing. The REWIND project is aimed at synergistically combining principles of signal processing, machine learning and information theory to answer relevant questions about the past history of such objects.
The vision inspiring LivingKnowledge is to consider diversity an asset and to make it traceable, understandable and exploitable, with the goal of improving navigation and search in very large multimodal datasets (e.g., the Web itself). LivingKnowledge will study the effect of diversity and time on opinions and bias, a topic with high potential for social and economic exploitation. We envisage a future where search and navigation tools (e.g., search engines) will automatically classify and organize opinions and bias (about, e.g., global warming or the Olympic games in China) and, therefore, will produce more insightful, better organized, easier-to-understand output.
LivingKnowledge employs interdisciplinary competences from, e.g., philosophy of science, cognitive science, library science and semiotics. The proposed solution is based on the foundational notion of context and its ability to localize meaning, and on the notion of facet, borrowed from library science, with its ability to organize knowledge as a set of interoperable components (i.e., facets). The project will construct a very large testbed, integrating many years of Web history and value-added knowledge, state-of-the-art search technology and the results of the project. The testbed will be made available for experimentation, dissemination, and exploitation.
The overall goal of the LivingKnowledge project is to bring a new quality into search and knowledge management technology, which makes search results more concise, complete and contextualised. On a provisional basis, we take conciseness as referring to the process of compacting knowledge into digestible elements, completeness as meaning the provision of comprehensive knowledge that reflects the inherent diversity of the data, and contextualisation as indicating everything that allows us to understand and interpret this diversity.
We will further explore requirements and evaluate our progress by applying the technology we develop to the LivingKnowledge testbed. The LivingKnowledge testbed will contain a large amount of timed, diverse and biased knowledge accumulated over many years of Web evolution (provided by the European Archive), and it will be enabled by state-of-the-art search technology (provided by Yahoo!).
Distributed Source Coding (DSC) addresses the problem of separately coding two or more correlated information sources while achieving the same performance as a joint coder. Although the possibility of achieving such a surprising result was proved theoretically about 30 years ago, its practical application to real data has been considered only recently. The aim of this project is to apply DSC concepts to two remote sensing scenarios.

The first scenario considers the compression of hyperspectral images on board a satellite platform. The bands of the hyperspectral image are seen as a set of correlated sources that should be coded jointly in order to reach the performance predicted by Shannon's theory. Due to the need to keep the complexity of the satellite platform as low as possible, it is mandatory that simple compression algorithms are used, which makes the joint coding of image bands difficult to achieve. By applying the DSC paradigm, the compression efficiency of a joint coder can be approached by a much simpler band-by-band coder. The reduced complexity of the encoder is paid for by an increased complexity of the decoder. However, this is not a problem, since image reconstruction is usually performed at the ground station, for which no particular computational constraint exists.

The second scenario regards the compression of an image on board a satellite when a similar image already exists in the archive of the ground station. This is typically the case for a series of images framing the same scene acquired at different time instants. In this case the project will investigate the possibility of applying DSC to transmit only the "innovation" of the new image with respect to the images available at the decoder. Significant advantages are expected in terms of efficient exploitation of the downlink channel, so as to increase the amount of data transmitted from the satellite to the ground stations.
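The DSC principle underlying both scenarios can be illustrated with a toy coset-coding example (a minimal sketch, with made-up bit depths and inter-band correlation): the encoder transmits only the coset index of each sample, and the decoder resolves the ambiguity using correlated side information it already holds.

```python
# Toy illustration of DSC: the encoder describes an 8-bit sample x only by its
# coset index (x mod 2^k); the decoder recovers x by picking, inside that
# coset, the value closest to the side information y (a correlated sample
# from a neighbouring band or an archived image).
import numpy as np

rng = np.random.default_rng(0)
k = 4                                   # bits actually transmitted per sample
step = 2 ** k

x = rng.integers(0, 256, 10_000)        # "current band" samples (8 bit)
y = np.clip(x + rng.integers(-5, 6, x.size), 0, 255)   # decoder's side information

# Encoder: send only the coset index (k bits instead of 8).
coset = x % step

# Decoder: within the received coset, choose the value closest to y.
base = y - ((y - coset) % step)                 # coset member just below (or at) y
candidates = np.stack([base, base + step])      # the two nearest coset members
x_hat = candidates[np.argmin(np.abs(candidates - y), axis=0), np.arange(x.size)]

print(f"rate: {k} bits/sample, decoding errors: {np.count_nonzero(x_hat != x)}")
```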
ECRYPT is a 4-year network of excellence funded within the Information Societies Technology (IST) Programme of the European Commission's Sixth Framework Programme (FP6) under contract number IST-2002-507932. It falls under the action line "Towards a global dependability and security framework". ECRYPT was launched on February 1st, 2004. Its objective is to intensify the collaboration of European researchers in information security, and in particular in cryptology and digital watermarking.
The goal of SPEED is to foster the advancement of the marriage between Signal Processing and Cryptographic techniques, at both the theoretical and practical levels. The objective is the initiation and development of a totally new and unexplored interdisciplinary framework and set of technologies for signal processing in the encrypted domain (s.p.e.d.). As a result, entirely new solutions will emerge to the problem of security in multimedia communication/consumption and digital signal manipulation.
Most of the technological solutions proposed so far to cope with multimedia security simply tried to apply some cryptographic primitives on top of the signal processing modules. These solutions are based on the assumption that the two communicating parties trust each other, so that encryption is used only to protect the data against third parties. In many situations, though, this assumption does not hold.
Recently, some pioneering works addressing a few scattered scenarios have pointed out that a possible solution to the above problems could consist in the application of the signal processing modules in the encrypted domain. It is the aim of SPEED to foster the birth of the new s.p.e.d. discipline and to demonstrate its ability to provide solutions to the call for security stemming from some selected application scenarios, including multimedia security and privacy-preserving access to sensitive contents.
SPEED research activity will be carried out both at a theoretical and a practical level, the theoretical part being dedicated to the development of a general framework investigating the fundamental limits and trade-offs of s.p.e.d., and the practical part being devoted to the development of some of the basic s.p.e.d. building blocks and to their application in some selected scenarios. The project will end with the validation of the proposed solutions by means of a demonstrator.
The objective of this activity is to develop a lossy compression algorithm based on DSC (Distributed Source Coding), using DSC to boost the performance of 2D compression schemes ideally to that of 3D schemes, without complicating the encoder. The algorithm shall be tested on real-world hyperspectral images such as those acquired by the AVIRIS sensor.
The performance of the compression algorithm developed within this project has to be compared with that of state-of-the-art technology. In particular, efficient 2D and 3D lossy compression algorithms must be identified and used as relevant benchmarks. Since the proposed activity targets improvements in terms of both complexity and compression efficiency, both high-performance and low-complexity techniques have to be considered when selecting the set of benchmark techniques.
The actual use of digital watermarking in real applications is impeded by the weakness of currently available algorithms against common signal processing manipulations and intentional attacks on the watermark. In this context, no effective solutions exist to cope with very simple manipulations leading to the desynchronization of the watermark embedder and detector. This project aims at filling this gap, first of all by analysing the desynchronization problem at a very general and theoretical level, by exploiting some new information-theoretic tools, then by testing the theoretical analysis on synthetic and real data, and finally by developing a new class of watermarking algorithms that are robust against desynchronization.
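The severity of the desynchronization problem can be illustrated with a minimal additive spread-spectrum example (signal length and embedding strength are arbitrary choices): correlation-based detection works perfectly when embedder and detector are aligned, and collapses after a shift of a single sample.

```python
# Toy illustration: an additive spread-spectrum watermark is detected reliably
# by correlation, yet a one-sample shift destroys the correlation.
import numpy as np

rng = np.random.default_rng(0)
n, gamma = 4096, 0.5                       # host length, embedding strength

host = rng.normal(0, 1, n)                 # host signal (stand-in for image samples)
w = rng.choice([-1.0, 1.0], n)             # pseudo-random watermark sequence
marked = host + gamma * w                  # additive spread-spectrum embedding

def detect(signal, w):
    """Normalised correlation against the watermark sequence."""
    return float(signal @ w) / len(w)

print(f"synchronised detector:   {detect(marked, w):.3f}")             # close to gamma
print(f"1-sample desynchronised: {detect(np.roll(marked, 1), w):.3f}")  # close to 0
```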
Processing sensitive information like biometric or biomedical signals in a non-trusted scenario, while ensuring that the privacy of the involved parties is preserved, requires that new tools and solutions are developed. In this project we investigate the possibility of processing signals in the encrypted domain for the privacy-aware treatment of sensitive information. By relying on advanced cryptographic primitives like homomorphic encryption, multiparty computation and zero-knowledge protocols, we will analyse the possibility of developing secure signal processing primitives like linear transforms, scalar products or FIR filters capable of operating on encrypted data. The developed signal processing primitives will be assembled into a set of basic pattern recognition tools (e.g., neural networks or classifiers) forming the basis for the analysis and interpretation of encrypted signals. At an even higher level, the pattern recognition primitives will be applied to practical scenarios involving the treatment of biometric signals, like face or iris images, or other kinds of sensitive data, like biomedical signals. The requirements stemming from the application level, including those raised by current privacy regulations, will be considered, so as to cast the activity into a practical setup. The architectural and data flow constraints will be considered as well, so as to encompass all the levels of the addressed scenario.
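As an illustration of one of the building blocks mentioned above, the sketch below computes a scalar product between an encrypted feature vector and plaintext weights using the additive homomorphism of the Paillier cryptosystem. The key size is a toy choice for readability and is of course not secure; the vectors are made-up values.

```python
# Minimal sketch (toy parameters, NOT secure): encrypted-domain scalar product
# based on the Paillier cryptosystem's additive homomorphism.
import math
import random

# --- toy Paillier key pair (tiny primes, for illustration only) ---
p, q = 1789, 1867
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                      # modular inverse of lambda mod n

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Encrypted-domain scalar product: prod_i E(x_i)^{w_i} = E(sum_i w_i * x_i).
x = [12, 7, 30, 5]                        # sensitive samples (e.g. biometric features)
w = [3, 1, 2, 10]                         # plaintext weights held by the service provider

enc_x = [encrypt(xi) for xi in x]
enc_dot = 1
for ci, wi in zip(enc_x, w):
    enc_dot = (enc_dot * pow(ci, wi, n2)) % n2

print(decrypt(enc_dot), sum(xi * wi for xi, wi in zip(x, w)))   # identical values
```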
Centrica is a company founded in 1999. It offers services and products focused on web, imaging and multimedia areas for public administrations and companies. Moreover, Centrica is increasingly positioning itself as a provider of innovative services and products in digital and Internet imaging. In this scenario, Centrica has developed XLmark®, a proprietary watermarking solution embedded in XLimage® to protect the copyright of images. The aim of this project is to develop a new watermarking technique that improves the robustness of the current algorithm. At the end of the project, the new release of XLmark® will be integrated into all of Centrica's image management products.
Other projects funded by Centrica:
Watermarking techniques for the protection and authentication of remote sensing images accessible through public and private data transmission networks.
Analysis of protection techniques for multimedia data and realization of watermarking software for images to be integrated with the digital terrestrial television system envisaged by the D3T project.
Recent signal processing results show that it is possible to perform signal acquisition employing sampling frequencies well below those dictated by Shannon's sampling theorem. This new theory, called "compressive sampling", applies to signals that exhibit some correlation, as do most natural signals and images, and paves the way for the development of new satellite imaging systems that can provide significantly improved resolution without increasing the number of detectors. This project addresses the design and proof-of-concept of compressive sampling for satellite imaging, with the objective of defining suitable sampling strategies and assessing the quality of the reconstructed images.
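A small numerical sketch of the claim (sizes and the constant in the measurement budget are illustrative assumptions): a k-sparse signal of length n can be recovered from on the order of k·log(n/k) random measurements, here via orthogonal matching pursuit rather than any specific algorithm the project may adopt.

```python
# Minimal sketch: sub-Shannon acquisition of a sparse signal followed by
# greedy recovery with orthogonal matching pursuit.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
n, k = 1024, 8
m = int(4 * k * np.log(n / k))                # far fewer measurements than n samples

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)

Phi = rng.normal(0, 1 / np.sqrt(m), (m, n))   # random measurement matrix
y = Phi @ x_true                              # compressive measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(Phi, y)
err = np.linalg.norm(omp.coef_ - x_true) / np.linalg.norm(x_true)
print(f"{m} measurements out of {n} samples, relative error {err:.2e}")
```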