V.I.P.P. Group

Ph.D. Theses

PhD thesis LaTeX templates (including Microsoft Publisher and Adobe Illustrator templates for front and back covers)

A Game-Theoretic Approach for Adversarial Information Fusion in Distributed Sensor Networks (Kassem Kallas)

Every day we share our personal information through digital systems which are constantly exposed to threats. For this reason, security-oriented disciplines of signal processing have received increasing attention in the last decades: multimedia forensics, digital watermarking, biometrics, network monitoring, steganography and steganalysis are just a few examples. Even though each of these fields has its own peculiarities, they all have to deal with a common problem: the presence of one or more adversaries aiming at making the system fail. Adversarial Signal Processing lays the basis of a general theory that takes into account the impact that the presence of an adversary has on the design of effective signal processing tools. By focusing on the application side of Adversarial Signal Processing, namely adversarial information fusion in distributed sensor networks, and adopting a game-theoretic approach, this thesis contributes to the above mission by addressing four issues. First, we address decision fusion in distributed sensor networks by developing a novel soft isolation defense scheme that protects the network from adversaries, specifically, Byzantines. Second, we develop an optimum decision fusion strategy in the presence of Byzantines. Third, we propose a technique to reduce the complexity of the optimum fusion by relying on a novel nearly-optimum message passing algorithm based on factor graphs. Finally, we introduce a defense mechanism to protect decentralized networks running consensus algorithms against data falsification attacks.
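For readers unfamiliar with the setting, the following is a minimal sketch of decision fusion in a parallel sensor network with Byzantine nodes, assuming a toy binary state, invented error rates and a plain majority-vote fusion rule; it illustrates the problem only, not the soft-isolation scheme or the optimum and message-passing fusion strategies developed in the thesis.

import numpy as np

rng = np.random.default_rng(0)

n_nodes = 21          # sensors reporting to the fusion center
n_byz = 6             # Byzantine nodes that flip their local decisions
p_err = 0.1           # local decision error probability of honest nodes
true_state = 1        # binary state of nature observed by the network

# Honest local decisions: correct with probability 1 - p_err
reports = np.where(rng.random(n_nodes) < p_err, 1 - true_state, true_state)

# Byzantine nodes (here the first n_byz, unknown to the fusion center) flip their reports
reports[:n_byz] = 1 - reports[:n_byz]

# Baseline fusion rule: majority vote over all received reports
fused = int(reports.sum() > n_nodes / 2)
print("true state:", true_state, " fused decision:", fused)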

Theoretical Foundations of Adversarial Detection and Applications to Multimedia Forensics (Benedetta Tondi)

Every day we share our personal information with digital systems which are constantly exposed to threats. Security-oriented disciplines of signal processing have thus received increasing attention in the last decades: multimedia forensics, digital watermarking, biometrics, network intrusion detection, steganography and steganalysis are just a few examples. Even though each of these fields has its own peculiarities, they all have to deal with a common problem: the presence of adversaries aiming at making the system fail. It is the purpose of Adversarial Signal Processing to lay the basis of a general theory that takes into account the impact of an adversary on the design of effective signal processing tools. By focusing on the most prominent problem of Adversarial Signal Processing, namely binary detection or Hypothesis Testing, we contribute to the above mission with a general theoretical framework for the binary detection problem in the presence of an adversary. We resort to Game Theory and Information Theory concepts to model and study the interplay between the decision function designer, a.k.a. Defender, and the adversary, a.k.a. Attacker. We analyze different scenarios depending on the adversary’s behavior, the decision setup and the players’ knowledge about the statistical characterization of the system. Then, we apply some of the theoretical findings to specific problems in multimedia forensics: the detection of contrast enhancement and multiple JPEG compression.
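As a toy illustration of the kind of interplay studied in the thesis, the sketch below sets up a simple zero-sum game between a Defender who picks a decision threshold on the sample mean and an Attacker who can shift the observations under one hypothesis by a bounded amount; the minimax threshold is found by brute force. All distributions, budgets and grids are invented for illustration and do not reproduce the games analyzed in the thesis.

import numpy as np

rng = np.random.default_rng(1)
n = 32                      # samples per observation
sigma = 1.0
attack_budget = 0.4         # maximum average shift the Attacker can apply under H1

def error_rates(thr, shift, trials=500):
    """Monte Carlo false-positive / false-negative rates for a mean-threshold test."""
    h0 = rng.normal(0.0, sigma, (trials, n)).mean(axis=1)
    h1 = rng.normal(1.0, sigma, (trials, n)).mean(axis=1) - shift  # attacked H1
    fp = np.mean(h0 > thr)
    fn = np.mean(h1 <= thr)
    return fp, fn

# Defender: choose the threshold whose worst-case total error (over Attacker shifts) is smallest
thresholds = np.linspace(0.0, 1.0, 21)
shifts = np.linspace(0.0, attack_budget, 9)
best_thr, best_worst = None, np.inf
for thr in thresholds:
    worst = max(sum(error_rates(thr, s)) for s in shifts)
    if worst < best_worst:
        best_thr, best_worst = thr, worst
print("minimax threshold:", round(best_thr, 2), " worst-case error sum:", round(best_worst, 3))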

Digital Progressive Compressed Sensing Algorithms for Remotely Sensed Hyperspectral Images (Siméon Kamdem Kuiteing)

The main advantage of Compressed Sensing (CS) is that compression takes place during the sampling phase, making possible significant savings in terms of the ADC, data storage memory, down-link bandwidth, and electrical power absorption. In this context, CS can be thought of as a natural candidate to optimize the capture of Hyperspectral Images. The main objective of CS is not to perform compression; rather, CS aims at avoiding altogether the acquisition of a very large number of samples, thereby making it possible to design sensors that are more effective at acquiring the signal of interest. By realizing the importance of exploiting the correlations in all three dimensions of the hyperspectral datacube, many efforts have been devoted to the design and development of reconstruction algorithms for hyperspectral imagery, but very few of them have been based on the use of CS principles to reduce the amount of data acquired and to lower the energy consumption of on-board satellite sensors. This research work has addressed all these aspects by developing innovative algorithms that provide solutions to these specific issues. In particular, we have explored the ways in which Compressed Sensing technology could be extended to iterative predictive CS reconstruction algorithms to help increase the efficiency of hyperspectral data collection and storage, while fully taking advantage of the sparsity structure present in all three dimensions of the HSI and keeping the computational complexity of the recovery stage at a very low level. In a nutshell, this thesis has centered on efficient iterative reconstruction mechanisms coupled with the CS framework to achieve both of the latter goals.
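The acquisition model underlying CS can be summarized in a few lines: a sparse signal x is observed through m < n random projections y = Φx and recovered by exploiting its sparsity. The sketch below shows this with a basic orthogonal matching pursuit on a synthetic 1-D signal; it is only a textbook illustration, not the predictive, datacube-aware reconstruction algorithms developed in the thesis.

import numpy as np

rng = np.random.default_rng(2)
n, m, k = 256, 80, 8                      # signal length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)   # k-sparse signal
Phi = rng.normal(0, 1 / np.sqrt(m), (m, n))                # random sensing matrix
y = Phi @ x                                                # compressive measurements

# Orthogonal matching pursuit: greedily pick the atom most correlated with the residual
support, residual = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(Phi.T @ residual))))
    coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    residual = y - Phi[:, support] @ coeffs

x_hat = np.zeros(n)
x_hat[support] = coeffs
print("relative reconstruction error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))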

Digital Forensic Techniques for Splicing Detection in Multimedia Contents (Marco Fontani)

Visual and audio contents have always played a key role in communications, because of their immediacy and presumed objectivity. This has become even more true in the digital era, and today it is common to have multimedia contents stand as proof of events. Digital contents, however, are also very easy to manipulate, thus calling for analysis methods devoted to uncovering their processing history. Multimedia forensics is the science trying to answer questions about the past of a given image, audio or video file, questions like “which was the recording device?” or “is the content authentic?”. In particular, authenticity assessment is a crucial task in many contexts, and it usually consists in determining whether the investigated object has been artificially created by splicing together different contents. In this thesis we address the problem of splicing detection in the three main media: image, video and audio. Since a fair number of image splicing detection tools are available today, we contribute to image forensics by developing a comprehensive decision fusion framework that allows the outputs of different algorithms to be intelligently merged. On the other hand, authenticity verification of digital videos is a rather unexplored field: we thus contribute by introducing a novel video forensic footprint, called Variation of Prediction Footprint, and we show how it can be used to detect double video encoding as well as removal, insertion and manipulation of frames. Finally, we tackle the problem of fake quality and forgery detection in MP3 compressed audio tracks.
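A toy example of decision fusion with explicit uncertainty is sketched below, combining the outputs of two hypothetical splicing detectors with Dempster's rule over the frame {tampered, authentic}; the mass values are invented and the snippet is only meant to illustrate belief-based fusion, not the actual framework developed in the thesis.

# Toy Dempster-Shafer combination of two splicing detectors over the frame {T, A}.
# Each detector reports mass on "tampered" (T), "authentic" (A) and "don't know" (TA).

def combine(m1, m2):
    """Dempster's rule of combination for two mass assignments over {T, A, TA}."""
    conflict = m1["T"] * m2["A"] + m1["A"] * m2["T"]
    norm = 1.0 - conflict
    return {
        "T":  (m1["T"] * m2["T"] + m1["T"] * m2["TA"] + m1["TA"] * m2["T"]) / norm,
        "A":  (m1["A"] * m2["A"] + m1["A"] * m2["TA"] + m1["TA"] * m2["A"]) / norm,
        "TA": (m1["TA"] * m2["TA"]) / norm,
    }

detector_1 = {"T": 0.7, "A": 0.1, "TA": 0.2}   # confident splicing detector
detector_2 = {"T": 0.4, "A": 0.2, "TA": 0.4}   # noisier detector, more uncertainty
print(combine(detector_1, detector_2))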

Techniques for Digital Image Forensics and Counter-forensics (Andrea Costanzo)

It’s all so easy with Photoshop. With imaging software so widely available, the manipulation of digital images is no longer a matter for experts only. While there is little harm besides gossip in retouching an unwanted belly or an incipient baldness, the simplicity of counterfeiting is a serious issue when it is exploited to convey social, political or military messages. It is not surprising, then, that restoring the credibility of digital content has become a task of paramount importance. Digital Image Forensics is the science of gathering information on the history of an image in such a way that its veracity can be evaluated, based on the principle that any manipulation leaves more or less subtle traces. We contribute to the mission of image forensics by addressing three open issues. First, we analyse the history of groups of near-duplicate images to reveal their parent-child relationships, thus opening new scenarios for copyright enforcement, news tracking or clustering. Second, we make possible the cooperation of heterogeneous image forensic detectors by fusing their decision scores in such a way as to deal with the uncertainty, incompatibility and noise typically affecting the analysis. Finally, we study the strengths and weaknesses of the forensic algorithms based on the Scale Invariant Feature Transform; we devise new attacks bypassing the forensic analysis and we discuss possible counter-measures.
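SIFT keypoint matching is the basic building block that both the studied forensic algorithms and the attacks against them operate on; a minimal OpenCV sketch is given below (OpenCV 4.4 or later is assumed, and the file names are placeholders). It is not the thesis's detector, attack or counter-measure.

import cv2

# Placeholder file names: any pair of (possibly near-duplicate) images
img1 = cv2.imread("original.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("suspect.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test to keep only distinctive matches
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]
print(len(good), "keypoint matches survive the ratio test")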

Privacy-Preserving Processing of Biomedical Signals with Application to Remote Healthcare Systems (Riccardo Lazzeretti)

To preserve the privacy of patients and service providers in biomedical signal processing applications, particular attention has been given to the use of secure multiparty computation techniques. This thesis focuses on the development of a privacy-preserving automatic diagnosis system whereby a remote server classifies a biomedical signal provided by the patient without obtaining any information about either the signal itself or the final result of the classification. Specifically, we present and compare two methods for the secure classification of electrocardiogram (ECG) signals: the former based on linear branching programs and the latter relying on neural networks. Moreover, a protocol that performs a preliminary evaluation of the signal quality is proposed. The thesis deals with all the requirements and difficulties related to working with data that must stay encrypted during all the computation steps. The proposed systems prove that efficiently carrying out complex tasks, like ECG classification and quality evaluation, in the encrypted domain is indeed possible in the semi-honest model, paving the way to interesting future applications wherein the privacy of signal owners is protected by applying high security standards.
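To give an idea of what a linear branching program looks like in the clear, the toy classifier below thresholds linear combinations of a made-up ECG feature vector and walks a small branching structure to a diagnostic label; in the privacy-preserving protocol each comparison would instead be evaluated obliviously, so that the server never sees the features, the intermediate decisions or the final label. Features, weights and labels are invented for illustration.

import numpy as np

# Hypothetical ECG feature vector (e.g., intervals and amplitudes extracted from one beat)
features = np.array([0.82, 0.36, 0.11, 1.40])

# Each node: (weights, threshold, next-if-true, next-if-false); negative index = leaf label
nodes = [
    (np.array([1.0, 0.0, 0.0, 0.0]), 0.9, 1, 2),     # node 0
    (np.array([0.0, 1.0, -0.5, 0.0]), 0.2, -1, -2),  # node 1 -> leaf 1 or leaf 2
    (np.array([0.0, 0.0, 1.0, 0.3]), 0.6, -2, -3),   # node 2 -> leaf 2 or leaf 3
]
labels = {-1: "normal", -2: "at risk", -3: "arrhythmia"}

i = 0
while i >= 0:
    w, thr, t_branch, f_branch = nodes[i]
    i = t_branch if w @ features <= thr else f_branch
print("classification:", labels[i])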

Privacy-Preserving Processing of Biometric Templates by Homomorphic Encryption (Pierluigi Failla)

It is commonly known that there is a trade-off between the security of systems based on biometric solutions and the privacy of the biometric data itself. In particular, the technologies behind practical privacy-preserving algorithms and protocols belong to several different disciplines, including signal processing, cryptography and information theory, each with a long-standing tradition of theoretical and practical studies. At the same time, little is known about their joint use, both at a theoretical and a practical level, the separation paradigm being by far the most popular approach. The main goal of this thesis is to provide privacy-preserving solutions to handle biometric samples, avoiding the leakage of information that is intrinsic in the existing approaches and guaranteeing the privacy of the users.
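The sketch below illustrates the kind of computation that additively homomorphic encryption enables on biometric templates: a client encrypts its binary template with textbook Paillier and a server computes the encrypted Hamming distance against its stored template without decrypting anything. The parameters are deliberately tiny and insecure, the templates are made up, and the snippet is a generic illustration rather than the protocols proposed in the thesis.

import math, random

# Textbook Paillier with toy parameters (insecure, for illustration only)
p, q = 1789, 1871
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
g = n + 1
mu = pow(lam, -1, n)                      # simplification valid because g = n + 1

def enc(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Client: binary biometric template, sent encrypted bit by bit
a = [1, 0, 1, 1, 0, 0, 1, 0]
enc_a = [enc(bit) for bit in a]

# Server: stored template b; computes Enc(sum_i a_i + b_i - 2*a_i*b_i) homomorphically
b = [1, 0, 0, 1, 1, 0, 1, 1]
acc = enc(0)
for ca, bit in zip(enc_a, b):
    term = (pow(ca, (1 - 2 * bit) % n, n2) * enc(bit)) % n2   # Enc(a_i*(1-2b_i) + b_i)
    acc = (acc * term) % n2                                    # homomorphic addition

print("Hamming distance:", dec(acc), " expected:", sum(x != y for x, y in zip(a, b)))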

New Techniques for Steganography and Steganalysis in the Pixel Domain (Giacomo Cancelli)

The thesis deals with steganography and steganalysis in the pixel domain. The contribution of this thesis is threefold. From a steganalysis point of view, we introduce a new steganalysis method called Amplitude of Local Extrema (ALE), which outperforms previously proposed pixel-domain methods. As a second contribution, we introduce a methodology for the comparison of different steganalyzers, and we apply it to compare ALE with state-of-the-art steganalyzers. The third contribution of the thesis regards steganography: we introduce a new embedding domain and a corresponding method, called MPSteg-color, which outperforms, in terms of undetectability, classical embedding methods such as ±1 embedding.
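For reference, the sketch below shows classical ±1 embedding (also known as LSB matching) on a toy vector of cover pixels: whenever a pixel's least significant bit does not match the message bit, the pixel is randomly incremented or decremented by one. This is the baseline embedding method mentioned above, not MPSteg-color.

import numpy as np

rng = np.random.default_rng(3)

def plus_minus_one_embed(pixels, bits):
    """Classical ±1 embedding: if a pixel's LSB already matches the message bit,
    leave it; otherwise add or subtract 1 at random (respecting the [0, 255] range)."""
    stego = pixels.astype(np.int16).copy()
    for i, bit in enumerate(bits):
        if stego[i] % 2 != bit:
            step = rng.choice([-1, 1])
            if stego[i] == 0:
                step = 1
            elif stego[i] == 255:
                step = -1
            stego[i] += step
    return stego.astype(np.uint8)

cover = rng.integers(0, 256, 32, dtype=np.uint8)      # toy cover pixels
message = rng.integers(0, 2, 32)                       # one message bit per pixel
stego = plus_minus_one_embed(cover, message)
print("bits recovered correctly:", np.array_equal(stego % 2, message))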

Characterization and Quality Evaluation of Geometric Distortions in Images with Application to Digital Watermarking (Angela D'Angelo)

The work of this thesis can be seen as a first step towards the characterization and quality evaluation of the class of local geometric distortions. In recent years, the problem of evaluating the perceptual impact of geometric distortions in images has received increasing attention from the watermarking community, due to the central role that such distortions play in watermarking theory. As a matter of fact, the application of a geometric distortion to a watermarked image causes a de-synchronization between the watermark embedder and the detector that in most cases prevents the correct extraction of the watermark. A first step towards solving the problems raised by geometric attacks is the characterization of the class of perceptually admissible distortions, defined as the class of geometric distortions whose effect cannot be perceived, or is judged acceptable, by a human observer. This requires the development of models to treat the distortions from a mathematical point of view. In this context, the first part of the thesis focuses on modeling local geometric transformations mathematically.
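As a concrete illustration of a local geometric transformation of the kind modeled in the thesis, the sketch below warps an image with a smooth random displacement field and resamples it by bilinear interpolation; the field parameters are arbitrary and the snippet does not reproduce the specific distortion models or perceptual thresholds studied in the thesis.

import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

rng = np.random.default_rng(4)
image = rng.random((128, 128))            # stand-in for a grayscale image

def smooth_field(shape, sigma, amplitude):
    """Low-pass filtered noise rescaled to the desired displacement amplitude (pixels)."""
    field = gaussian_filter(rng.normal(0, 1, shape), sigma)
    return field / field.std() * amplitude

dx = smooth_field(image.shape, 12.0, 2.0)
dy = smooth_field(image.shape, 12.0, 2.0)

# Resample the image at the displaced coordinates (bilinear interpolation)
rows, cols = np.meshgrid(np.arange(128), np.arange(128), indexing="ij")
warped = map_coordinates(image, [rows + dy, cols + dx], order=1, mode="reflect")

print("mean displacement (pixels):", float(np.mean(np.hypot(dx, dy))))
print("pixel-value MSE introduced:", float(np.mean((warped - image) ** 2)))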

Watermarking is not the only field where an analysis of geometric distortions in images would be useful: in any application dealing with such distortions, the availability of an objective quality metric capable of handling them would be of invaluable help. Thus, in the second part of the thesis, two objective quality metrics for the perceptual evaluation of geometrically distorted images are introduced.
