
LeftOver projects 2012 - 2013

SNE group http://uva.nl/

# title
summary
supervisor contact
students
R P

1/2 SN

Virtualization vs. Security Boundaries.

Traditionally, security defenses are built upon a classification of the sensitivity and criticality of data and services.  This leads to a logical layering into zones, with an emphasis on command and control at the point of inter-zone traffic.  The classical "defense in depth" approach applies a series of defensive measures to network traffic as it traverses the various layers.

Virtualization erodes the natural edges, and this affects guarding system and network boundaries.  In turn, additional technology is developed to instrument virtual infrastructure.  The question that arises is the validity of this approach in terms of fitness for purpose, maintainability, scalability and practical viability.
Jeroen Scheerder <Jeroen.Scheerder=>on2it.net>

2 SN

DNS security revisited.

The crucial DNS remains a liability today.  In the past, several attempts - and huge government impulses - have been made towards DNSsec adoption. Success has been far from evident, meriting a closer look.  At this point, there might be actual field data to (dis)prove DNSsec skepticism.  DNSsec support has been mandatory for several TLDs for an extensive period now.  While mandatory, participation has been less than complete.  And of the zones for which DNSsec was deployed, it is an open question whether this initial deployment has been followed by proper maintenance (as is necessary for DNSsec zones).

Specific questions are: What adoption rate has DNSsec seen amongst (for example) .gov zones?  What is the trend, and the adoption timeline?  Of the zones offering DNSsec at point in time T, which ones are still valid at point T+n?

The running hypothesis would be that DNSsec has been plausibly tried, and has been proven a failure.  Let's see this hypothesis disproved!  Or… else…?
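A small sketch of how zone status could be classified once DNSKEY/RRSIG data has been gathered (e.g. with a DNS library); the classification rules below are a simplifying assumption for illustration, not an official definition of DNSsec health:

```python
from datetime import datetime

def dnssec_status(has_dnskey, has_rrsig, rrsig_expiry, now):
    """Classify a zone's DNSsec state from its apex records.

    has_dnskey / has_rrsig: whether DNSKEY / RRSIG records are present.
    rrsig_expiry: expiry time of the apex RRSIG (ignored if absent).
    """
    if not has_dnskey:
        return "unsigned"
    if not has_rrsig:
        return "broken"          # keys published but nothing signed
    if rrsig_expiry < now:
        return "expired"         # signed once, but not maintained
    return "valid"

now = datetime(2013, 1, 1)
# A zone signed mid-2010 whose signatures were never refreshed:
print(dnssec_status(True, True, datetime(2010, 6, 1), now))  # expired
```

Running this classifier over the same zone list at times T and T+n would directly answer the maintenance question above.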
Jeroen Scheerder <Jeroen.Scheerder=>on2it.net>

5 S

Efficient delivery of tiled streaming content.

HTTP Adaptive Streaming (e.g. MPEG DASH, Apple HLS, Microsoft Smooth Streaming) is responsible for an ever-increasing share of streaming video, replacing traditional streaming methods such as RTP and RTMP. The main characteristic of HTTP Adaptive Streaming is that it is based on the concept of splitting content up into numerous small chunks that are independently decodable. By sequentially requesting and receiving chunks, a client can recreate the content. An advantage of this mechanism is that it allows a client to seamlessly switch between different encodings (e.g. qualities) of the same content.
The technique known as Tiled Streaming builds on this concept by not only splitting up content temporally, but also spatially, allowing specific areas of a video to be independently encoded and requested. This method allows for navigation in ultra-high-resolution content, while not requiring the entire video to be transmitted.
An open question is how these numerous spatial tiles can be distributed and delivered most efficiently over a network, reducing both unnecessary overhead and latency.
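As an illustration of the spatial side: the client only needs the tiles its viewport overlaps. A minimal sketch (the grid and viewport dimensions are hypothetical):

```python
def tiles_for_viewport(vx, vy, vw, vh, tile_w, tile_h, cols, rows):
    """Return the (col, row) grid indices of the spatial tiles that
    intersect a viewport at pixel position (vx, vy) of size (vw, vh)."""
    first_col = max(vx // tile_w, 0)
    last_col = min((vx + vw - 1) // tile_w, cols - 1)
    first_row = max(vy // tile_h, 0)
    last_row = min((vy + vh - 1) // tile_h, rows - 1)
    return [(c, r) for r in range(first_row, last_row + 1)
                   for c in range(first_col, last_col + 1)]

# A 7680x4320 video cut into 640x360 tiles (12x12 grid); a 1080p viewport
# panned to (1000, 500) needs only a 4x4 block of tiles, not all 144.
print(len(tiles_for_viewport(1000, 500, 1920, 1080, 640, 360, 12, 12)))  # 16
```

The delivery question is then which of these per-tile request patterns a network or CDN can serve with the least overhead.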

Ray van Brandenburg <ray.vanbrandenburg=>tno.nl>


6 S

Visualization of large data sets.

Riscure is a security testing lab, specialized in side channel analysis (SCA) and fault injection (FI) evaluations. During an SCA or FI project, large amounts of data are produced and manipulated. An efficient visual representation of this data might enable security analysts to gain new insights into the security of the target under evaluation.
First, the student will be asked to investigate (e.g. through interviews with Riscure employees) the issues a security analyst faces when manipulating these large data sets.
Based on the output of the first stage, the student will make a proposal for a more suitable data visualization algorithm and show how the new method improves on the currently used one. To demonstrate the effectiveness of the proposed algorithm, a proof of concept will be produced (e.g. using http://d3js.org/) using the data sets provided by Riscure.

Research question:
  • What are the most efficient data visualization techniques which can be used to improve our current SCA and FI evaluations?
The following deliverables are requested from the student:

  • A clear description of the problem
  • A clear description of the relevant data sets
  • Proof of concept implementation of proposed algorithm
Useful information:

http://www.smashingmagazine.com/2007/08/02/data-visualization-modern-approaches/
http://d3js.org/
http://www.ted.com/talks/hans_rosling_reveals_new_insights_on_poverty.html
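One concrete issue with these data sets is scale: plotting millions of trace samples directly is slow and hides peaks. A common remedy is a min/max envelope that downsamples while preserving extremes; a minimal sketch (the bin count is an arbitrary illustration):

```python
def minmax_envelope(trace, bins):
    """Reduce a long sample trace to at most `bins` (min, max) pairs so
    that millions of points can be drawn without losing peaks."""
    n = len(trace)
    step = max(n // bins, 1)
    envelope = []
    for i in range(0, n, step):
        chunk = trace[i:i + step]
        envelope.append((min(chunk), max(chunk)))  # extremes survive
    return envelope

# 1000 samples reduced to 10 (min, max) pairs for display:
print(minmax_envelope(list(range(1000)), 10)[0])  # (0, 99)
```

Whether this kind of reduction actually helps the analyst is exactly the sort of question the interview stage should answer.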

Niek Timmers <Timmers=>riscure.com>

7 SF

Security Metrics.

Onsight Solutions delivers network security and application delivery solutions to middle sized and large companies. Onsight believes that offering just the security hardware does not add any security. Managing this hardware in a proper way does. Offering support or full management of those devices is the core business of Onsight.

But how can you measure security (or maybe you measure the chance that a hacker is able to attack your environment)? What factors are important when you try to grade the security solution at a company? And can those factors be monitored in an automated way, 24x7? What tools are required? Or maybe some human resources? When it becomes possible to define a number, and compare it with a certain level (which might be a standard or a contracted value), it becomes possible to monitor the security solution. In case the security level drops, an engineer is alarmed to take action, so the level of security is maintained. Besides this, a number for security is easy to understand for people who have to decide about the budget but don't know anything about IT security (think about the IT managers/security officers).

Main question: "How can you assign a number to the quality of a security solution at a company?" Some subquestions: What parameters are important to measure? And what is the importance of each factor? What tools are required to measure this? For example, you can think about honeypots, port scans, vulnerability scans, etc. And maybe some external data sources are required to gather security information?
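One possible shape for such a number, purely as a sketch: a weighted average of normalized per-factor measurements. The factor names and weights below are invented placeholders, not a proposed standard:

```python
def security_score(measurements, weights):
    """Combine per-factor measurements (0.0 = worst, 1.0 = best) into a
    single 0-100 number using agreed per-factor weights."""
    total = sum(weights.values())
    return 100 * sum(measurements[f] * w for f, w in weights.items()) / total

# Hypothetical factors: patching matters most, honeypot hits least.
weights = {"patch_level": 3, "open_ports": 2, "honeypot_hits": 1}
measurements = {"patch_level": 0.9, "open_ports": 0.5, "honeypot_hits": 1.0}
print(round(security_score(measurements, weights), 1))  # 78.3
```

The hard research questions are precisely what belongs in `measurements` and how to justify the weights, not the arithmetic itself.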
Roel van der Jagt <roel.van.der.jagt=>onsight.nl>

9 SF

Building a more resilient TOR.

The Onion Router (TOR) network is an overlay network that allows users to browse the Internet anonymously. By relaying traffic through three randomly picked TOR nodes, it is virtually impossible to trace where a request was sent from and what the final destination of a packet is. Over the years, many researchers have developed theoretical and practical attacks against the TOR environment, which has eventually led to a more resilient TOR network.

We want students to look into current weaknesses of the TOR ecosystem from multiple points of view. As an example, it is currently possible to determine in multiple different ways whether an HTTP request has been routed through the TOR network. On the other hand, it has become increasingly difficult over the years to enumerate the bridge nodes that are used as entry points into TOR.

Research at KPMG IT Advisory can be challenging. We strive for the best results and therefore invest a considerable amount of time in you, to help you achieve the best. But to succeed together we require fully determined students that would like to go the extra mile. For this research, it would be beneficial if candidates are already familiar with the concepts of TOR, i.e. by running their own entry nodes.

The RP topics as stated on the website are fixed, but we are open to changes in the exact research approach if the student prefers. We encourage students to come up with their own ideas and approaches. During the short intake interview you are invited to bring your ideas and approaches to the table. We use the intake to select the students who will get the opportunity to perform their research project at KPMG.
Marc Smeets <smeets.marc=>kpmg.nl>

10 SF

Distributed Password Cracking Platform - the final step.

There are many reasons for cracking password hashes. During IT audits we crack to test the effectiveness of a password policy, and during security tests we crack to further penetrate into a network. KPMG IT Advisory performs both assignments continuously and password cracking is a day-to-day activity. In order to fulfill our team's password cracking demands we have a setup that consists of a CPU cluster (~70 CPU cores with John-MPI) and a GPU box (5 GPU cards, many different tools) that is used for specific cracks when GPU power is faster. However, the current setup can be further optimized in usability and performance.

This research project is a new - possibly final - step in a larger set of research projects, starting from 2010. Previous students researched the early stages of overall power of cracking via GPU (Bakker and Van der Jagt, 2010), the power of GPU cracking specifically for one hash type (Sprengers 2011), ways of combining GPU and CPU cracking power (van Kerkwijk and Kasabov 2011) and the design of a distributed password cracking platform (Pavlov and Veerman). Now it is time to combine the previous research and work towards the actual implementation of an easy-to-use, multi-cracking-tool, multi-architecture, high-performing and intelligent password cracking platform.

Your goals will be:
  • Review and optimize the current design to update to current developments where needed
  • Cracking strategy was identified as important but not previously completed: research cracking strategies that combine CPU and GPU cracking, dictionary, brute force and rainbow table cracking for a defined set of hash types (LM, NTLM, MD5, SHA256, etc., to be further defined). The strategy should be adaptive to multiple circumstances (different amounts of hashes, different types of hashes, current load on the cluster, etc.).
  • Perform the actual implementation in our lab
The research is an example of combining skills of system and network engineers with the skills of security testers.
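As a sketch of what such an adaptive strategy component might look like (the hash-type sets, load threshold and attack ordering are illustrative assumptions, not the platform's actual design):

```python
def pick_strategy(hash_type, n_hashes, gpu_load):
    """Choose a backend and an ordered attack plan for a batch of hashes.

    Assumption for illustration: fast unsalted hashes go to the GPU
    unless it is nearly saturated; otherwise the CPU cluster is used.
    """
    fast = {"LM", "NTLM", "MD5", "SHA256"}
    backend = "gpu" if hash_type in fast and gpu_load < 0.8 else "cpu"
    # Try cheap attacks first, escalate to brute force only if needed.
    plan = ["dictionary", "dictionary+rules"]
    if hash_type == "LM":
        plan.append("rainbow-table")   # LM keyspace is small enough
    plan.append("brute-force")
    return backend, plan

print(pick_strategy("LM", 500, 0.1))
```

A real implementation would also weigh `n_hashes` (e.g. rainbow tables pay off less for large batches), which is exactly the kind of trade-off the research should pin down.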

Research at KPMG IT Advisory can be challenging. We strive for the best results and therefore invest a considerable amount of time in you, to help you achieve the best. But to succeed together we require fully determined students that would like to go the extra mile.

The RP topics as stated on the website are fixed, but we are open to changes in the exact research approach if the student prefers. We encourage students to come up with their own ideas and approaches. During the short intake interview you are invited to bring your ideas and approaches to the table. We use the intake to select the students who will get the opportunity to perform their research project at KPMG.
Marc Smeets <smeets.marc=>kpmg.nl>

11 SF

Feasibility of attacks on weak SSL ciphers.

Weak SSL ciphers have been around for many years. In theory, many ciphers are cracked. But in current networks we find that the usage of weak ciphers is still very common. In practice only a few attempts have been successful, with the FPGA-based COPACOBANA attack on DES being a noteworthy one. Many other 'theoretically cracked' weak ciphers are still not easy to crack in practice.

We would like the students to research the feasibility of cracking the weak ciphers in use. The research can include the entire process: intercepting communication, extracting the data used for the attack, selecting the best way of cracking, performing the crack and uncovering the secrets. Ideally, the research results in a statement on the feasibility of cracking these weak ciphers. Exactly which ciphers are to be included will be selected at the start of the research.
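Back-of-the-envelope feasibility follows from keyspace size and search rate alone. The search rate below is an assumed round figure for COPACOBANA-class hardware, used only to illustrate the arithmetic:

```python
def crack_time_days(key_bits, keys_per_second):
    """Worst-case exhaustive-search time for a cipher keyspace, in days."""
    return 2 ** key_bits / keys_per_second / 86400

# Assumed rate of 6.5e10 keys/s against 56-bit DES: roughly two weeks
# worst case, i.e. clearly practical.
print(round(crack_time_days(56, 6.5e10), 1))  # 12.8
# The same budget against a 128-bit keyspace is hopeless:
print(crack_time_days(128, 6.5e10) > 1e20)    # True
```

The interesting part of the project is everything this calculation leaves out: interception, data extraction, and attacks that are cheaper than exhaustive search.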

Research at KPMG IT Advisory can be challenging. We strive for the best results and therefore invest a considerable amount of time in you, to help you achieve the best. But to succeed together we require fully determined students that would like to go the extra mile.

The RP topics as stated on the website are fixed, but we are open to changes in the exact research approach if the student prefers. We encourage students to come up with their own ideas and approaches. During the short intake interview you are invited to bring your ideas and approaches to the table. We use the intake to select the students who will get the opportunity to perform their research project at KPMG.
Marc Smeets <smeets.marc=>kpmg.nl>

12 N

IPv6 risks and vulnerabilities.

Because the world is adopting IPv6 more and more, we also run into the security problems involved with this migration. Of course IPv6 has built-in security: IPSec support is mandatory in IPv6, and IPSec is actually a part of the IPv6 protocol. IPv6 provides header extensions that ease the implementation of encryption, authentication, and Virtual Private Networks (VPNs). IPSec functionality is basically identical in IPv6 and IPv4, but one benefit of IPv6 is that IPSec can be utilized along the entire route, from source to destination. It can be assumed, however, that merely enforcing the use of IPSec within IPv6 does not solve all security problems, and therefore there is a need for research into the vulnerabilities and risks that arise during the use of IPv6. For this research project the focus is on the vulnerabilities that exist because of the lack of secure techniques and protocols used in combination with IPv6.

R.F.Visser <rene.visser=>govcert.nl>

13 F

Determining camera model from JPEG quantization tables.

Acceleration methods for searching image databases, for example through optimizing search via quantization tables in JPEG. Some investigation has been done on how this JPEG characteristic can be used by such methods, but further investigation should give a better view of its feasibility. Other JPEG characteristics not yet exploited by any search method in current use may be investigated as well. These methods are used to search for images that have, for example, deviant or specific values for these characteristics. Certain values may indicate the use of a camera of some kind, or that an image has been altered (or recreated) by specific image editing software. A proof of concept that shows the use of such characteristics in search methods will probably be implemented.
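A proof of concept could start from exact table matching. The signature database below is entirely hypothetical; real luminance quantization tables would have to be collected per camera model (Pillow, for instance, exposes a JPEG's tables via the `quantization` attribute of a loaded image):

```python
def match_camera(qtable, signatures):
    """Return the camera models whose known luminance quantization
    table exactly matches the one extracted from a JPEG."""
    return [model for model, table in signatures.items() if table == qtable]

# Hypothetical signature database: model -> 64-entry luminance table.
signatures = {
    "CameraA": [16, 11, 10] + [12] * 61,
    "CameraB": [8, 6, 5] + [7] * 61,
}
print(match_camera([8, 6, 5] + [7] * 61, signatures))  # ['CameraB']
```

In practice tables vary with the quality setting, so a real method would likely need per-model table families or a distance measure rather than exact equality.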

Netherlands Forensics Institute.
Marcel Worring <m.worring=>uva.nl>
Zeno Geradts <zeno=>holmes.nl>

16 F

XIRAF Web GUI effectiveness.

The NFI developed a forensic analysis system (XIRAF) that supports investigators in finding traces in digital material via a Web interface. During the last year several hundred users used the interface for investigations. The Web GUI was developed by technicians rather than GUI developers. We ask the student to:
  1. Inventory user feedback on the Web GUI;
  2. Match the GUI against Web design methods available in theory and practice;
  3. Possibly design (and implement) an improved GUI.
Zeno Geradts (DT) <zeno=>holmes.nl>


23
Electro magnetic fault injection Characterization.

Fault injection attacks are active attacks, either non-invasive (voltage, clock) or semi-invasive (laser), based on the malicious injection of faults. These attacks have proven to be practical and are relevant for all embedded systems that need to operate securely in a potentially hostile environment. Electromagnetic fault injection (EMFI) is a new fault injection technique that can be used during security evaluation projects. The student working on this project will be using the tooling provided by Riscure.

A previously conducted RP project, by Sebastian Carlier, focused on the feasibility of EMFI (see: http://staff.science.uva.nl/~delaat/rp/2011-2012/p19/report.pdf). Another previously conducted RP project, by Albert Spruyt, focused on understanding fault injected in the powerline (see: http://staff.science.uva.nl/~delaat/rp/2011-2012/p61/report.pdf).  This project will focus on extending the work performed by Sebastian and Albert.

The goal of this project is:
  • Create an EMFI setup (Sebastian's work)
  • Extend the fault injection framework (Albert's work)
  • Correlate results with Albert's results
Research question: Are faults introduced using EMFI comparable to faults injected in the powerline?
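One simple way to quantify "comparable" would be the overlap between the fault sets each technique produces; a sketch with made-up fault records (address, observed effect):

```python
def fault_overlap(em_faults, power_faults):
    """Rough similarity between two fault campaigns: the Jaccard overlap
    of the (address, effect) pairs each technique produced."""
    a, b = set(em_faults), set(power_faults)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical campaign results:
em = {(0x100, "skip"), (0x104, "skip"), (0x200, "corrupt")}
power = {(0x100, "skip"), (0x104, "skip"), (0x300, "reset")}
print(round(fault_overlap(em, power), 2))  # 0.5
```

The real work is defining fault records precisely enough (target instruction, glitch parameters, observed effect) that such a comparison against Albert's results is meaningful.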

The following deliverables are requested from the student:
  • A clear description of the performed tests and their results
  • Recommendations for future testing
Topics: Fault injection, EMFI, low level programming, assembly, microcontroller, electronics.
Note: This project can be combined with "Optical fault injection characterization".
Niek Timmers <Timmers=>riscure.com>



24

Optical fault injection characterization.

Fault injection attacks are active attacks, either non-invasive (voltage, clock) or semi-invasive (laser), based on the malicious injection of faults. These attacks have proven to be practical and are relevant for all embedded systems that need to operate securely in a potentially hostile environment. Optical fault injection is a technique performed commonly within Riscure. The student working on this project will be using the tooling provided by Riscure.

A previously conducted RP project, by Albert Spruyt, focused on understanding the effects of voltage fault injection (see: http://staff.science.uva.nl/~delaat/rp/2011-2012/p61/report.pdf).

This project will focus on extending the work performed by Albert by focusing on laser fault injection.
The goal of this project is:
  • Create an optical (laser) fault injection setup using Riscure's tooling
  • Study the effects of laser fault injection
  • Compare and analyze the effects of laser fault injection with voltage (powerline) fault injection
Research question: Are faults introduced using laser fault injection comparable to faults injected via the powerline?

The following deliverables are requested from the student:
  • A clear description of the performed tests and their results
  • Recommendations for future work
Topics: Fault injection, optical, laser, low level programming, assembly, microcontroller, electronics.
Note: This project can be combined with "Electro magnetic fault injection characterization".
Niek Timmers <Timmers=>riscure.com>
25 N

Optical paths in DAS-4.

The SNE group is installing an optical cross connect (the DAS-4 switch) that will allow paths to be created on demand between four locations of the DAS-4 cluster. Dynamic paths between clusters can improve application performance and contribute to a more energy-conscious use of equipment. The student will evaluate the current code used to control the switch and modify and improve it if needed. The student will define and configure a control and monitoring system for the switch using web services, and will identify the most useful topology setup in relation to existing and foreseen applications. Finally, the student will identify the steps needed to integrate the switch into the OpenNSA fabric.

Knowledge of Python and LabView is required.

Ref: http://www.cs.vu.nl/das4/research.shtml
Ref: http://www.surfnet.nl/nl/Innovatieprogramma%27s/gigaport3/Documents/GP3-2010-PHO-8R-Gamage-GovBrd.pdf
Ref: https://bitbucket.org/myzt/das4_pxc/
Ralph Koning <R.Koning=>uva.nl>
Paola Grosso <p.grosso=>uva.nl>



26 SN

Automatic services composition in GEMBus/ESB based Cloud PaaS platform.

This project will investigate a Cloud Platform as a Service (PaaS) implementation based on GEMBus (GEANT Multidomain Bus) and ESB (Enterprise Service Bus), and look for solutions to automatically compose and deploy services. ESB is a widely adopted SOA platform for Enterprise Information Systems (EIS), and GEMBus is an ESB extension for distributed multi-domain services.
The project will require investigating one of the ESB platforms, e.g. FUSE ESB, and finding which functional and configuration components need to be added to the ESB platform to support automated services composition and dynamic configuration.
A simple service composition scenario and prototype need to be demonstrated.
This work will contribute to an on-going project and may result in a joint paper or conference presentation.
Yuri Demchenko <y.demchenko=>uva.nl>



27 SN

Load balancing in ESB based service platform.

This project will investigate the load balancing solutions provided by major existing Enterprise Service Bus (ESB) implementations such as Fuse ESB or Apache ServiceMix, writing and running a simple test program and collecting statistics. ESB is a widely adopted SOA platform for Enterprise Information Systems (EIS).
The project will require investigating one of the ESB platforms, e.g. FUSE ESB, in particular the components responsible for load balancing and intelligent message queuing: the Normalised Message Router and ActiveMQ.
A simple load balancing scenario for the installation of a few services under different message traffic patterns needs to be demonstrated.
This work will contribute to an on-going project and may result in a joint paper or conference presentation.
Yuri Demchenko <y.demchenko=>uva.nl>



33 SN

Live IT Infrastructure Requirements Verification.

Context

The environment for the research project is the Information Services organization of Air France-KLM. In this organization the datacenter is responsible for the management of the business applications and the underlying system and network infrastructure. The applications management department of the datacenter has defined a concept called the Artificial IT Intervention Handler (AITIH). This concept is realized as an Agile/Scrum project. One of the functions in this concept is a Blueprint Generator. A Blueprint is a graphical representation of infrastructure components of the system and network infrastructure, showing servers and their connectivity to the LAN and SAN networks.
IST (actual) situation of the IT infrastructure

Auto discovery information is collected every day by system and network monitoring tools. This information shows the actual status of the IT infrastructure.
This information is stored in a database for analysis. Blueprints can be generated from this database using a proprietary tool based on SVG.

SOLL (target) situation of the IT infrastructure

IT architects are involved in the development and change process of business applications. They are responsible for the IT Global Design (ITGD) of the underlying infrastructure for the business applications. An ITGD is part of the documentation of a business application. IT architects define the principles that should be used when designing a particular application infrastructure.

Research question

Business applications have non-functional requirements for the infrastructure. The ITGD defines these non-functional requirements. Availability is the most important infrastructure requirement for business applications.

The research questions are:
  1. Define an architectural governance procedure that is able to detect deviations between the ITGD design and the actual infrastructure implementation (auto discovery status).
  2. One of the challenges for the AITIH is to automate architectural governance. How can the pattern generator be enhanced to detect deviations from the design automatically, based on applicable design rules?
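If both the ITGD (SOLL) and the auto-discovery status (IST) can be expressed as sets of facts, deviation detection reduces to set comparison. A minimal sketch with invented fact tuples:

```python
def deviations(itgd_design, discovered):
    """Compare the designed infrastructure (SOLL) with the auto-discovery
    status (IST). Both are sets of (component, attribute, value) facts."""
    return {
        "missing": itgd_design - discovered,    # designed but not found
        "unexpected": discovered - itgd_design, # found but not designed
    }

# Hypothetical facts: the design requires two SAN paths, discovery saw one.
design = {("app1-db", "san_paths", 2), ("app1-web", "lan_uplinks", 2)}
actual = {("app1-db", "san_paths", 1), ("app1-web", "lan_uplinks", 2)}
print(deviations(design, actual)["missing"])
```

The governance procedure would then define who gets alerted for which class of deviation, and the research question becomes how to map ITGD design rules and discovery output onto such a common fact model.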
Betty Gommans <betty.gommans2=>klm.com>



38
N

Power-aware application characterization.

The increasing heterogeneity of applications has led to low-accuracy energy prediction using simple power models, and it has also introduced uncertain results in power management. Can we find performance metrics or attributes to characterize applications according to their impact on the energy consumption of a computer system? If they exist, power-aware classification of applications can be performed for fine-grained power models and effective power management. A simple application-cognizant power management scheme can be proposed to validate the effectiveness of the classification.
Hao Zhu <H.Zhu=>uva.nl>
Karel van der Veldt <karel.vd.veldt=>uva.nl>
Paola Grosso <p.grosso=>uva.nl>



41 SF

Android Pattern Unlock.

Android provides an unlock mechanism which is not based on the traditional PIN code but on visual patterns: drawing the correct pattern unlocks the phone. This is an efficient way of creating easy-to-remember "passwords" which are hard to brute force. You will research different possibilities to bypass this security. We are especially interested in breaking it conceptually (e.g. brute forcing in an efficient way) rather than exploiting an implementation error in specific implementations. You will get an Android device which you can use as you want during your research.
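For scale: the pattern space is small. Counting all valid 4-to-9-node patterns on the 3x3 grid (a straight move over an unvisited dot is disallowed; over a visited dot it is allowed) shows why efficient brute forcing is the interesting question:

```python
def count_patterns(min_len=4, max_len=9):
    """Count valid Android unlock patterns on the 3x3 grid (cells 1..9)."""
    # Pairs whose connecting line passes over a middle cell.
    skip = {}
    for a, b, c in [(1, 3, 2), (4, 6, 5), (7, 9, 8), (1, 7, 4),
                    (2, 8, 5), (3, 9, 6), (1, 9, 5), (3, 7, 5)]:
        skip[(a, b)] = c
        skip[(b, a)] = c

    def dfs(current, visited, length):
        total = 1 if length >= min_len else 0
        if length == max_len:
            return total
        for nxt in range(1, 10):
            if nxt in visited:
                continue
            mid = skip.get((current, nxt))
            if mid is not None and mid not in visited:
                continue  # would jump over an unvisited cell
            total += dfs(nxt, visited | {nxt}, length + 1)
        return total

    return sum(dfs(start, {start}, 1) for start in range(1, 10))

print(count_patterns())  # 389112
```

Fewer than 2^19 possibilities in total, so the research challenge is not raw search but ordering guesses cleverly (e.g. by pattern popularity) and getting guesses past the lockout mechanism.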
Henri Hambartsumyan <HHambartsumyan=>deloitte.nl>
Martijn Knuiman <MKnuiman=>deloitte.nl>
Coen Steenbeek <CSteenbeek=>deloitte.nl>



43 F

Modelling IT security infrastructures to structure risk assessment during IT audits.

As part of the annual accounts, IT audits are executed to gain assurance on the integrity of the information that forms the annual statement of accounts. This information is accessible from the application layer, but also from the database layer. An audit focuses on different parts of the infrastructure to get sufficient assurance on the integrity of information. Different parts of the infrastructure depend on each other, and because of this, correlation between the different layers is possible.

This research project focuses on the correlation between different infrastructure layers and on automating the execution of an IT audit. By making use of reporting tools like QlikView, we would like to create a PoC to verify whether specific audit approaches can successfully be automated.
Coen Steenbeek <CSteenbeek=>deloitte.nl>
Derk Wieringa <DWieringa=>deloitte.nl>
Martijn Knuiman <MKnuiman=>deloitte.nl>

44 SN

Multicast delivery of HTTP Adaptive Streaming.

HTTP Adaptive Streaming (e.g. MPEG DASH, Apple HLS, Microsoft Smooth Streaming) is responsible for an ever-increasing share of streaming video, replacing traditional streaming methods such as RTP and RTMP. The main characteristic of HTTP Adaptive Streaming is that it is based on the concept of splitting content up into numerous small chunks that are independently decodable. By sequentially requesting and receiving chunks, a client can recreate the content. An advantage of this mechanism is that it allows a client to seamlessly switch between different encodings (e.g. qualities) of the same content.
There is a growing interest from both content parties as well as operators and CDNs to not only be able to deliver these chunks over unicast via HTTP, but to also allow for them to be distributed using multicast. The question is how current multicast technologies could be used, or adapted, to achieve this goal.
Ray van Brandenburg <ray.vanbrandenburg=>tno.nl>

46 F

YouTube-scanner.

Goal:
More and more videos are being published on YouTube whose content is such that you want to find them soon after upload. The metadata associated with videos is often limited. Therefore, the selection has to be based on the visual content of the video.

Approach:
Develop a demonstrator that automatically downloads and analyses the latest YouTube videos. The demonstrator should operate as a two-stage process: first, make a content-based selection of the most relevant video material using the screenshot that YouTube provides for every new video. In case the video is considered relevant, download the entire video for full analysis. Use available open source tools such as OpenCV.
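The two-stage pipeline could be sketched as follows; the scoring functions stand in for real OpenCV-based thumbnail and video analysis and are placeholders:

```python
def scan(videos, thumbnail_score, video_score, threshold=0.5):
    """Two-stage selection: cheaply score each video's thumbnail first,
    and only run full-video analysis on the ones that pass."""
    flagged = []
    for vid in videos:
        if thumbnail_score(vid) < threshold:
            continue               # stage 1: skip irrelevant uploads
        if video_score(vid) >= threshold:
            flagged.append(vid)    # stage 2: full analysis confirms
    return flagged

# Stub classifiers (hypothetical relevance scores per video id):
thumb = lambda v: {"a": 0.9, "b": 0.2, "c": 0.8}[v]
full = lambda v: {"a": 0.9, "b": 0.9, "c": 0.1}[v]
print(scan(["a", "b", "c"], thumb, full))  # ['a']
```

The point of the design is cost: the expensive download-and-analyse step runs only on the small fraction of uploads whose single screenshot already looks relevant.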

Result:
Demonstrator for the YouTube-scanner.
Mark van Staalduinen <mark.vanstaalduinen=>tno.nl>



47 F

Project-X monitor.

Goal:
On Friday night, 21 September 2012, the village of Haren was the scene of serious riots. The riots followed a Facebook invitation for a "sweet 16" party. The activities on Twitter surrounding this Project-X event can provide clues on the course of events turning a birthday party into a fighting ground. Perhaps thorough analysis can help prevent such escalations in the future, or at least influence the course of events, using a live monitoring system.

Approach:
TNO has performed a first analysis of the more than 700,000 tweets around Project-X. Use the results of this analysis (plus additional information you extract yourself from the tweets using data mining techniques) to develop a concept for a live monitoring system that generates an alert when alarming Twitter activity is detected surrounding a scheduled event. Examples include retweet explosions, tweets from influential Twitter accounts, certain use of language and hashtags, hoaxes, etc.
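A retweet-explosion detector, for instance, could compare tweet volume in the current time window against the preceding one. The window size and thresholds below are illustrative guesses, not validated parameters:

```python
from collections import deque

class ExplosionDetector:
    """Alert when tweet volume in a sliding window exceeds a multiple
    of the preceding window's volume."""

    def __init__(self, window_s=300, factor=5.0, min_count=100):
        self.window_s, self.factor, self.min_count = window_s, factor, min_count
        self.times = deque()

    def feed(self, ts):
        """Register one tweet timestamp; return True if it triggers an alert."""
        self.times.append(ts)
        # Keep two windows of history.
        while self.times and self.times[0] < ts - 2 * self.window_s:
            self.times.popleft()
        recent = sum(1 for t in self.times if t >= ts - self.window_s)
        previous = len(self.times) - recent
        return recent >= self.min_count and recent > self.factor * max(previous, 1)
```

Calibrating `factor` and `min_count` against the Project-X data set (so that the Haren escalation would have triggered an alert without drowning operators in false alarms) is the core of the concept.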

Result:
Presentation of the concept for a live Twitter monitoring system.

REQUIREMENT: good knowledge of the Dutch Language!
Martijn Spitters <martijn.spitters=>tno.nl>


48

Cross-linking of objects and people in social media pictures.

Goal:
Automatically cross-link persons and objects found in one social media picture to the same persons and objects in other pictures.

Approach:
Develop a concept and make a quickscan of suitable technologies. Validate the concept by developing a demonstrator using TNO/commercial/open-source software. Investigate which elements influence the cross-linking results.

Result:
Presentation of the concept and demonstrator.
John Schavemaker <john.schavemaker=>tno.nl>

49

What internal company data can you find outside of the company?

The cloud is a very useful and versatile tool, which is often even free for the users. It allows easy sharing, has no implementation or integration cost, and is accessible from everywhere. This can, however, easily lead to violations of security policies and to storing data in places outside of the control of the company owning the data. During this project you investigate what sort of data is stored outside of the bank (in, for example, Prezi, Dropbox, Pastebin, etc.) using Google hacking, proxy logs, etc. Additionally, we intend to devise a strategy for how information leakage through these channels can be detected and minimized in a practical way. (This is harder than it sounds.)
Steven Raspe <steven.raspe=>nl.abnamro.com>



51

Automated SSL health assessment.

It has become a real fad for researchers to try and break SSL over the last few years. Several attacks have been published with illustrious names like "BEAST", "CRIME" and "Lucky 13", and issues have been discovered both at the protocol level and in the various ciphers that can be used.

In this day and age where almost everything is a webservice, organisations usually have many dozens, if not more, of SSL services running. Combined with the number of flaws already discovered, it gets hard to ensure that all of these are at the proper security level and that they remain that way.

This project has the following goals:
  • Assess the various potential problematic uses of the SSL protocol and ciphers based on literature.
  • Create a tool that, given a list of URLs/hosts and port numbers, evaluates which protocols and ciphers are offered and presents per host a list of results for various potential problems, like the attacks described earlier but also things like certificate validity or chain issues. The output should be machine-parsable so it can be integrated into monitoring infrastructure. Ideally it should summarise the "SSL health" of a host in a single metric. It should be an extensible framework, so that if a new problem or attack is discovered, the tool can easily be updated.
  • Run the tool against all our known or discovered SSL services.
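The single-metric idea could be as simple as subtracting per-finding penalties from 100. The finding names and weights below are invented for illustration (the actual probing of protocols and ciphers is the tool's real work and is out of scope here); new attacks extend the table:

```python
def ssl_health(host_findings):
    """Summarise a host's SSL findings into a single 0-100 metric.
    Penalty weights are illustrative, not an official scoring standard."""
    penalties = {
        "sslv2_enabled": 40, "export_ciphers": 30, "rc4_enabled": 10,
        "cert_expired": 30, "chain_incomplete": 15, "beast_vulnerable": 10,
    }
    score = 100
    for finding in host_findings:
        score -= penalties.get(finding, 5)   # unknown findings cost 5
    return max(score, 0)

print(ssl_health(["rc4_enabled", "beast_vulnerable"]))  # 80
```

A monitoring system can then alarm on the metric dropping below a contracted level, while the per-finding list remains available for the engineer who has to fix it.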
Thijs Kinkhorst <thijs=>uvt.nl>
Teun Nijssen <teun=>uvt.nl>



52 SN F

DDos Attacks & Electronic Payment systems.

P.S. 52 and 53 are the same, but contain enough research questions to make two distinct RPs.

Equens is the first pan-European full-service payment processor. We are at the forefront of payment and card transaction processing. Maintaining the integrity of our networks is essential, and as the nature of payments changes, making use of the public internet, additional measures have to be considered to ensure that Equens can handle the risks associated with this mechanism. These risks come in many forms and currently, possibly the most significant are related to (Distributed) Denial of Service (DDoS) attacks.
DDoS attacks are becoming an increasing threat in the cyber-world, both with regard to the chance of becoming a victim as well as the impact of such an attack. At least that is what is perceived from information from the media and security experts.

Equens wishes to understand the risks better, and in particular the risks associated with Distributed Denial of Service attacks. To this end we are proposing that a study be performed.
At this time the following subjects are considered relevant. The successful candidate(s) may concentrate on one or more subjects as applicable:
  • The risk of Distributed Denial of Service (DDoS) attacks at this time and the anticipated development of these attacks. In particular aspects such as:
    • What is the trend in DDoS attacks in relation to line of business (including financial risk), business size, geographical location (from a victim's point of view) and other parameters like technical advancement (type), duration, bandwidth, ... (from an attacker's point of view)?
  • The types of mechanism available to mitigate DDoS attacks and anticipated development. In particular aspects such as:
    • What is the best remedy against such an attack, both theoretically and based on the solutions available in the market (in relation to company size/price-performance)? These questions can then be applied to Equens' services, differentiated by their visibility (public, private, or semi-private) and based on Equens' position in the European market.
  • Experience of other organisation(s) with DDoS and how they have managed their approach to DDoS.
The authors of this study should have the following experience:
  • A basic understanding of TCP/IP and the various other protocols that together form what is termed the Internet (DNS, IPsec, etc.). "Learning on the job", i.e. being assisted by Equens' experts in this area, will be provided;
  • The ability to discuss network issues with Equens' own experts, as well as to collect information from external sources where necessary;
  • The ability to be analytical and produce an analytical, subject-based report.
Additional points:
  • The candidate(s) will form part of a small expert team that is essentially self-managed. Therefore the candidate(s) will be expected to be self-motivated and capable of performing most activities with little or no support. However advice and assistance in contacting the various current stakeholders and our suppliers etc. will be provided;
  • The team will allocate time to assist the candidate on a regular basis and will provide timely advice during the entire project.
The deliverables will be defined by the expert team in discussion with the candidate. It is thought that the following will be produced:
  • A single report (per subject or group of subjects) in which the various current initiatives are described and compared with each other.
  • The produced report will be owned by Equens, but after suitable review (for example making certain parts of the report anonymous etc.) may be used by the candidate as part of their work experience and CV etc.
Stefan Dusée <Stefan.Dusee=>nl.equens.com>



53
SN
F

DDos Attacks & Electronic Payment systems.

P.S. 52 and 53 are the same, but contain enough research questions to make two distinct RPs.

Equens is the first pan-European full-service payment processor. We are at the forefront of payment and card transaction processing. Maintaining the integrity of our networks is essential, and as the nature of payments changes, making use of the public internet, additional measures have to be considered to ensure that Equens can handle the risks associated with this mechanism. These risks take many forms; currently, possibly the most significant are related to (Distributed) Denial of Service (DDoS) attacks.
DDoS attacks are becoming an increasing threat in the cyber-world, both with regard to the chance of becoming a victim as well as the impact of such an attack. At least that is what is perceived from information from the media and security experts.

Equens wishes to understand the risks better, and in particular the risks associated with Distributed Denial of Service attacks. To this end we are proposing that a study be performed.
At this time the following subjects are considered relevant. The successful candidate(s) may concentrate on one or more subjects as applicable:
  • The risk of Distributed Denial of Service (DDoS) attacks at this time and the anticipated development of these attacks. In particular aspects such as:
    • What is the trend in DDoS attacks in relation to line of business (including financial risk), business size, geographical location (from a victim's point of view) and other parameters like technical advancement (type), duration, bandwidth, ... (from an attacker's point of view)?
  • The types of mechanism available to mitigate DDoS attacks and anticipated development. In particular aspects such as:
    • What is the best remedy against such an attack, both theoretically and based on the solutions available in the market (in relation to company size/price-performance)? These questions can then be applied to Equens' services, differentiated by their visibility (public, private, or semi-private) and based on Equens' position in the European market.
  • Experience of other organisation(s) with DDoS and how they have managed their approach to DDoS.
The authors of this study should have the following experience:
  • A basic understanding of TCP/IP and the various other protocols that together form what is termed the Internet (DNS, IPsec, etc.). "Learning on the job", i.e. being assisted by Equens' experts in this area, will be provided;
  • The ability to discuss network issues with Equens' own experts, as well as to collect information from external sources where necessary;
  • The ability to be analytical and produce an analytical, subject-based report.
Additional points:
  • The candidate(s) will form part of a small expert team that is essentially self-managed. Therefore the candidate(s) will be expected to be self-motivated and capable of performing most activities with little or no support. However advice and assistance in contacting the various current stakeholders and our suppliers etc. will be provided;
  • The team will allocate time to assist the candidate on a regular basis and will provide timely advice during the entire project.
The deliverables will be defined by the expert team in discussion with the candidate. It is thought that the following will be produced:
  • A single report (per subject or group of subjects) in which the various current initiatives are described and compared with each other.
  • The produced report will be owned by Equens, but after suitable review (for example making certain parts of the report anonymous etc.) may be used by the candidate as part of their work experience and CV etc.
Stefan Dusée <Stefan.Dusee=>nl.equens.com>



58
N

Quarantainenet.

Quarantainenet uses DNS detection as one of its sensors when monitoring a network for malware, by matching DNS requests against known bad domains (blacklists). Another, so far untested, aspect of DNS detection is using DNS MX requests to detect computers that are sending spam. By looking at parameters like requests per interval, the number of distinct requests and requests for specific domains, we suspect that it is possible to create a model to predict the probability that a computer is indeed sending spam.
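The suspected model could start as something as simple as the sketch below. The feature choices, thresholds and weights are assumptions for illustration only; the actual research would fit them to observed traffic:

```python
# Crude heuristic sketch: score a host's likelihood of spamming from its
# MX-query behaviour. Many MX queries for many distinct domains in a
# short interval is typical of a spam bot, not of an ordinary client.
# The saturation points (30/min, 100 domains) are assumed, not measured.
def mx_spam_score(queries_per_min, distinct_domains):
    rate_component = min(queries_per_min / 30.0, 1.0)
    spread_component = min(distinct_domains / 100.0, 1.0)
    return 0.5 * rate_component + 0.5 * spread_component  # 0.0 clean .. 1.0 spammy
```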

Supervisors:
  • Administrative and overview: Casper Joost Eyckelhof
  • Technical content: Bas van Sisseren
Casper Joost Eyckelhof <support=>quarantainenet.nl>

60
S
Efficiently unpacking YAFFS2 file systems.

Abstract: The YAFFS2 file system is popular on a particular subset of embedded Linux devices, especially Android devices such as tablets and phones. The YAFFS2 file system is different from other popular file systems on Linux, since it is much more tied to the characteristics of, for example, the flash chip. There are many variants around, also because YAFFS2 has not (yet) been merged into the mainline kernel and many vendors use forks or snapshots of the YAFFS2 code, which introduces slight differences.

There are a few unpacking tools that can unpack a limited subset of YAFFS2 file systems, and they come with other limitations such as segfaults, limits on the number of files in the file system, and so on. Add to that the fact that the file system is underdocumented, has no header that indicates where the file system starts (unlike, for example, ext2fs), has a wide variety of configuration options and a very large install base, and you can start to imagine that this is problematic for people doing, for example, license compliance or forensics.

Your task would be to investigate the structure of the YAFFS2 file system, using the C code from the Android Linux kernel, existing documentation, other available unpacking tools and various example file systems, and to create a much better solution than is available right now. Your solution should be licensed under the GNU GPL version 2+ license.
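To give a feel for the parsing involved, here is a hedged sketch of reading one YAFFS2 object header chunk. The layout below (little-endian u32 object type, u32 parent object id, u16 legacy checksum, then a NUL-padded name) follows the common yaffs_obj_hdr struct, but endianness, chunk size and padding differ between vendor forks, so these offsets are assumptions to verify against each target image:

```python
import struct

# Object type values as in the common yaffs2 enum (assumed, verify per fork).
YAFFS_OBJECT_TYPE = {1: "file", 2: "symlink", 3: "directory",
                     4: "hardlink", 5: "special"}

def parse_object_header(chunk):
    """Parse one object-header chunk into type, parent id and name.

    Assumes: little-endian fields, no struct padding before the name,
    name field of 256 NUL-padded bytes starting at offset 10."""
    obj_type, parent_id, _legacy_sum = struct.unpack_from("<IIH", chunk, 0)
    raw_name = chunk[10:10 + 256]
    name = raw_name.split(b"\x00", 1)[0].decode("utf-8", "replace")
    return {"type": YAFFS_OBJECT_TYPE.get(obj_type, "unknown"),
            "parent": parent_id, "name": name}
```

A full unpacker would additionally interpret the out-of-band tags per chunk (sequence number, object id, chunk id) to reassemble file contents, which is where most of the fork-specific variation lives.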
Armijn Hemel <armijn=>tjaldur.nl>



61

(semi-)known plaintext attack against files encrypted with E-SafeNet.

Abstract: Device manufacturers and their suppliers in China are increasingly using encryption to make it hard for competitors to reuse code, even though the code in question is the Linux kernel which has been released under the GPLv2. Since companies in the consumer electronics industry go belly up very frequently it would not be the first time that source code gets lost, putting companies downstream of the supply chain in a very uncomfortable position of not being able to comply with license conditions and having their product taken off the market. It also makes it a lot harder to do license compliance audits and security audits.

One tool that is used for this is from a Chinese company called E-SafeNet. I recently obtained an archive with "source" code, containing U-Boot and the Linux kernel for an Android device. The encryption seems to be block based and I have (partial) source code which would make it interesting to perform a known plaintext attack on the encryption.

Your task would be to find out more about the encryption and if possible break it!
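A first experiment could test the simplest hypothesis: that E-SafeNet XORs each fixed-size block with a repeating keystream. That assumption is unverified; if it holds, XORing a ciphertext block with the matching known plaintext recovers the keystream, which then decrypts the rest:

```python
# Known-plaintext probe under the (unverified) assumption of a
# repeating XOR keystream. The block size would itself have to be
# determined experimentally; 16 here is a placeholder.
BLOCK = 16

def recover_keystream(cipher_block, known_plain_block):
    """keystream = ciphertext XOR plaintext, byte for byte."""
    return bytes(c ^ p for c, p in zip(cipher_block, known_plain_block))

def apply_keystream(data, keystream):
    """XOR is symmetric: the same routine encrypts and decrypts."""
    return bytes(b ^ keystream[i % len(keystream)]
                 for i, b in enumerate(data))
```

If decrypting unrelated blocks with the recovered keystream yields valid C source, the hypothesis stands; if not, the diffing of identical plaintext blocks at different offsets would be the next step.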
Armijn Hemel <armijn=>tjaldur.nl>



62
N

Rich identity provisioning.

In order for the next phase of the internet to be as open and user-centric as the past, end users of the internet should be in control of the mechanisms and credentials with which they use internet services and collaborate with others. There are a number of well known and lesser known technologies already in existence as building blocks for federation in this emerging future: notably OpenID, BrowserID, OAuth 1/2, U-Prove and XRI/WebFinger, next to older technologies such as X.509 certificates, RADIUS and PGP. Each provides another piece of the puzzle, and the use cases for each of them vary as much as their adoption. This means that, in order for the end user to remain flexible, internet service providers should aim at supporting multiple mechanisms in parallel. The project will investigate the best possible architecture for an integrated polyglot identity provisioning system that allows for pseudonymity, and identify possible open source components that could be integrated in such a solution.
Michiel Leenaars <michiel=>nlnet.nl>



63
N

Automated migration testing.

Unattended content management systems are a serious risk factor for internet security and for end users, as they allow trustworthy information sources on the web to be easily infected with malware and turned evil.
  • How can we use well known software testing methodologies (e.g. continuous integration) to automatically test whether available updates that fix security weaknesses in software running on a website can be safely implemented, with as little involvement of the end user as possible?
  • How would such a migration work in a real-world scenario?
In this project you will look at the technical requirements for automated migration testing, and if possible design a working prototype.
Michiel Leenaars <michiel=>nlnet.nl>



64
N

Federated microblogging benchmark.

Microblogging is a type of service where short public text messages are published on the internet, and semi-private messages can be exchanged between internet users through an intermediary. Popular services include market leaders Weibo and Twitter.com, as well as corporate solutions like Yammer. Many of these are centralised commercial services, very limited in scope by their business model. The services are increasingly controversial because of their closed nature, their volatile APIs for developers (not based on published standards, with the platform provider sometimes crossing the line and competing directly with external developers), the lack of openness for external developers, and the fact that in many cases privacy-infringing data from both users and their followers is being sold and/or exploited in the background. Typically it is not possible to communicate outside of the closed network.

Decentralised federated microblogging solutions like Pump.io, Status.net, Buddycloud and Friendica hold significant promise to improve the situation, especially when these free and libre solutions become part of the standard hosting/access package of internet users. If we can make the technology automatically available to every internet user through their ISPs and/or hosting providers, adopting the same user credentials they use with email, it would allow for automatic discovery across the internet and zero-configuration integration with existing tools (e.g. mail clients, instant messaging software), as well as 'identity ownership' for the end user. This opens the possibility of being able to automatically 'follow' users at any email address (provided they belong to the 23% of users that want this), allow closed user groups, hidden service discovery and serious user-determined cryptography.

The research project looks into the various (open source) technologies which are available, and makes recommendations for inclusion into the project. What are the most mature solutions, in features and in implementation quality? To what extent are upcoming standards such as OStatus (sufficiently) supported? What important features are not yet standardised? What are the performance characteristics of the current federated microblogging solutions? What could be good, horizontally scalable deployment strategies?
Michiel Leenaars <michiel=>nlnet.nl>



65
N

Migration models for hosting companies.

In this project you will look at the typical setup of different classes of hosting companies.
  • What is their technical architecture?
  • How is responsibility for maintenance delegated, and what are the biggest maintenance costs?
  • What are their business requirements for an upgrade of the software part of their technical infrastructure?
Given that their server racks will be underprovisioned and oversubscribed, can we devise any models to migrate such a business with minimal extra dependencies? For instance, a cloud-supported migration model where some or all services are temporarily moved to PaaS providers. How would such a model look, and how can we successfully demonstrate that such an approach is feasible?
Michiel Leenaars <michiel=>nlnet.nl>



66
N

Privacy aspects for LDAP service.

It is possible for domain owners to publish contact information and structured business information in LDAP, using a simple SRV record in DNS. One thing is missing: control of who can access what information. With proper control, we even envision moving towards a contact-relationship model that could form the basis of federated social media, in a modular and self-controlled fashion.

When using a pseudonym composed of a username under a domain, one typically wants to offer a controlled amount of information to remote peers who query for it. Specifically, it is not desired that a simple LDAP search yields all pseudonyms, but once a pseudonym is known to someone it should be possible to access all information related to it.

What this means is that not all LDAP attributes are public to everyone. There is a place for attributes that only show up when matched exactly in the base DN or the search filter. Examples may be mail (spammers should not be able to retrieve all mail addresses from your LDAP) and uid (which provides a hint to attackers), but in general it should be configurable. There will be some impact due to the way search filters are constructed in LDAP.
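For comparison, standard slapd ACLs can hide attributes wholesale, but cannot express "visible only when matched exactly in the base DN or search filter" — that is precisely the gap the overlay has to fill. An illustrative baseline fragment (slapd.conf syntax; the attribute choice mirrors the examples above):

```
# Hide mail and uid from everyone but the entry owner; the rest of the
# entry stays readable. Note the limitation: plain ACLs hide the
# attributes unconditionally and cannot grant access conditional on an
# exact match in the client's search filter.
access to attrs=mail,uid
        by self write
        by * none
access to *
        by * read
```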

In this research project you will first design a test suite that can properly determine the desired characteristics. You will then create a proof of concept "overlay" plugin for OpenLDAP implementing this facility, and if possible demonstrate that previously feasible but undesirable retrievals are avoided.
Michiel Leenaars <michiel=>nlnet.nl>



67
N

Fan-out LDAP service.

Many services are optimally set up as an outgoing service and a separate incoming service. This provides clarity and consistency in the maintenance of those services. For LDAP, a similar approach is not common, but it can be highly beneficial. Specifically, it could unleash contact information of the entire world onto your desktop, in your mailtool, and it could link various types of contact information.

The specific application that would enable this is a "smart LDAP proxy" that takes LDAP queries from relatively straightforward clients, such as ones for an Android phonebook or a Contacts tool on your desktop; the proxy would aim to go out to the Internet to retrieve as much information as possible from remote LDAP servers, including even the public server for one's own domain.  The approach of fanning out is already partially implemented in the DNSSRV backend, but it can be enhanced by looking up mail addresses, URIs and so on when these are presented, and could even interpret domain- or email-shaped name queries as potential remote lookups to proxy.  To make this even more powerful, there already is a "Translucent Proxy" overlay in OpenLDAP that enables private annotations and modifications to information retrieved from such remote LDAP servers.

As a use case, think of a simple dictionary lookup utility for contact gathering: entering an email address such as someone@example.com, rewriting it to uid=someone,dc=example,dc=com, and then looking it up through the SRV record _ldap._tcp.example.com to learn more about this person. Tools like mailers could do this too, of course.  And the query might return alternate forms of contact, such as XMPP and SIP addresses, as well as a website.  And even where the remote side publishes nothing extra, you could make your own notes on the people you contacted in the past.  Things suddenly start to integrate!
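The rewriting step in this use case is mechanical enough to sketch directly. The uid=...,dc=... mapping below is the convention the proposal assumes, not a universal rule:

```python
# Map a mail address to the LDAP DN convention used in the use case
# above, and build the DNS SRV name used to locate the domain's public
# LDAP server.
def mail_to_dn(address):
    local, _, domain = address.partition("@")
    dcs = ",".join("dc=" + part for part in domain.split("."))
    return "uid=%s,%s" % (local, dcs)

def ldap_srv_name(domain):
    return "_ldap._tcp." + domain
```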

In this research project you will create a proof of concept "backend" plugin for OpenLDAP implementing this facility. Subsequently you will test it with the existing "translucent proxy" overlay to manipulate contact information locally, and investigate what clients work well with this approach on the various desktop and mobile platforms.
Michiel Leenaars <michiel=>nlnet.nl>



69
N

Exploring problem areas of newly developed telecom technology "LTE Direct".

A hot new topic under development in the telecommunications world is "Proximity Based Services", also referred to as "LTE Direct". LTE Direct improves the way mobile devices can discover services available in the local area, as well as the way these mobile devices can communicate with those services.

Establishing communication using LTE Direct is fundamentally different from establishing communication based on, for example, Bluetooth or Wi-Fi Direct. In contrast with the existing approaches to direct communication, LTE Direct uses Radio Network Spectrum licensed to Mobile Operators. Mobile operators want and need to be in control of the usage of their licensed spectrum. This results in new requirements for LTE Direct compared to e.g. Bluetooth and Wi-Fi Direct, for example because the mobile operator wants to charge for the usage of its spectrum, or because of regulatory requirements such as Lawful Intercept.
The purpose of this research is to look into the LTE Direct concept to see what new issues this way of direct communication between mobile devices raises, to explore one or more of these issues, and to examine proper solutions. If new solutions are developed during your assignment at TNO, TNO is willing to file a patent application for this new solution on your behalf.
Wissingh, B.F. (Bastiaan) <bastiaan.wissingh=>tno.nl>



71

Attacking MDM systems.

Mobile. Private devices. Corporate data. Bring your own device and work from it. You can love it or hate it, but it is here. Unfortunately, classical lockdown procedures cannot be applied to secure these devices. Besides legal and privacy-related issues, another interesting domain is becoming more critical: physical security. In order to manage the risks involved around mobile, corporates are rolling out Mobile Device Management (MDM) systems to monitor and control devices that are hooked onto corporate data or corporate infrastructure. But how secure are these solutions? There are already known methods to hide rooted or jailbroken statuses from applications. Test your reverse engineering skills and see how you can manipulate these control systems.
Henri Hambartsumyan <HHambartsumyan=>deloitte.nl>
Martijn Knuiman <MKnuiman=>deloitte.nl>
Coen Steenbeek <CSteenbeek=>deloitte.nl>



72

Creating your own Internet Factory.

One of the biggest problems in computer networks is the lack of flexibility to support innovation. It is widely accepted that new network architectures are required. Given the success of cloud computing in the IT industry, the network industry is now driving the development of software-based networks. Software-based networks allow deployment and management of network architectures and services through resource virtualization. Ultimately, a program can describe the blueprint of the software-based network, i.e. its deployment, configuration, and behavioral aspects.

At TNO/UvA, we created a prototype of an Internet factory, which enables us to produce networks on demand at many locations around the globe. In this work, we will develop a program using our prototype that produces OpenFlow networks (using Open vSwitch, Daylight, and Floodlight). We will produce a number of interesting networks, e.g. one that finds better paths than Internet routing, one that is robust against failures of network elements, and one that offers larger capacity by combining multiple paths. Is it possible to capture years of experience and best practices in network design, deployment, and operations into a compiler?

http://youtube.com/user/ciosresearch
Rudolf Strijkers <strijkers=>uva.nl>



73

Practical OpenFlow: Real-Time Black-Hole of (D)DoS traffic.

In recent years DDoS attacks have grown from a nuisance into a real threat for ISPs. Most ISPs have a number of high-capacity links (often >= 10Gbit) to the Internet backbone. DDoS mitigation solutions that can handle this kind of traffic are very expensive, and most ISPs cannot afford them. A much better solution would be to use the existing network infrastructure (switches, routers) and give it some extra intelligence to drop malicious DDoS packets.

OpenFlow gives network administrators the ability to off-load most of the intelligence to an external controller. This also opens up the possibility of integrating additional intelligence into the basic packet forwarding. This project investigates the possibility of leveraging this development to perform DDoS detection on the external controller and using the high-capacity hardware of an OpenFlow switch to filter the malicious packets, without taking the target completely offline.

The inspiration comes from a project performed by Sakura Internet [1]. They used sFlow with a custom script that instructs the controller through a REST API. Although testing the detection rate of this setup could be part of the project, a solution based solely on OpenFlow (i.e. without the use of other / less widely accepted protocols) is preferred.

[1] http://packetpushers.net/openflow-1-0-actual-use-case-rtbh-of-ddos-traffic-while-keeping-the-target-online/
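To make the mitigation step concrete, a controller-side helper could construct the black-hole rule to be pushed to the switch. The field names and the "static flow pusher" style of interface below are assumptions modelled on Floodlight-era controllers and vary per controller version; only the payload construction is sketched, the actual HTTP POST is left out:

```python
# Build a flow-mod that drops IPv4 traffic from one attack source.
# In OpenFlow, a flow entry with an empty action list means "drop",
# so the switch discards the traffic at line rate without involving
# the controller per packet.
def build_drop_rule(switch_dpid, src_ip, priority=32767):
    return {
        "switch": switch_dpid,      # assumed field name (controller-specific)
        "name": "drop-" + src_ip,   # rule name, so it can be deleted later
        "priority": str(priority),  # above normal forwarding rules
        "ether-type": "0x0800",     # match IPv4 only
        "src-ip": src_ip,           # assumed field name (controller-specific)
        "active": "true",
        "actions": "",              # empty action list -> drop
    }

rule = build_drop_rule("00:00:00:00:00:00:00:01", "203.0.113.7")
```

The detection side would decide when to call this (and when to expire the rule so the source is not black-holed forever).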
Hidde van der Heide <hidde.vanderheide=>os3.nl>



76
SF

Mobile app fraud detection framework.

How to prevent fraud in mobile banking applications
Applications for smartphones are commodity goods used for retail (and other) banking purposes. Leveraging this type of technology for money transfer attracts criminal organisations trying to commit fraud. One of many possible security controls is the detection of fraudulent transactions or other types of activity. Detection can be implemented at many levels within the payment chain; one of those levels could be the application itself.
This assignment entails research into the information that would be required to detect fraud from within mobile banking applications, and building a client-side fraud detection framework within mobile banking applications to turn fraud around.
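As a thought experiment for the kind of signal such a client-side framework could use, consider scoring a transaction against a simple per-user profile before it is submitted. The features, thresholds and weights below are assumptions for illustration only; identifying the right inputs is exactly the research question:

```python
# Illustrative client-side risk score for one transaction, combining a
# few assumed behavioural features into a 0.0 (normal) .. 1.0
# (suspicious) value. Weights are placeholders, not a fitted model.
def transaction_risk(amount, beneficiary_known, hour,
                     usual_max_amount, usual_hours=range(7, 24)):
    score = 0.0
    if amount > usual_max_amount:
        score += 0.5               # unusually large transfer for this user
    if not beneficiary_known:
        score += 0.3               # first payment to this beneficiary
    if hour not in usual_hours:
        score += 0.2               # outside the user's normal active hours
    return score
```

A score above some threshold could trigger step-up authentication inside the app rather than blocking the payment outright.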
Steven Raspe <steven.raspe=>nl.abnamro.com>



79
F

Malware analysis NFC enabled smartphones with payment capability.

The risk of mobile malware is rising rapidly. This, combined with the development of new techniques, provides a lot of new attack scenarios. One of these techniques is the use of mobile phones for payments.
In this research project you will look at how resistant these systems are against malware on the mobile device. We would like to look at the theoretical threats, but also perform hands-on testing.
NOTE: timing on this project might be a challenge, since the testing environment is only available during the pilot from August 1st to November 1st.
Steven Raspe <steven.raspe=>nl.abnamro.com>



80
SN
F

What internal company data can you find outside of the company?

The cloud is a very useful and versatile tool, which is often even free for its users. It allows easy sharing, has no implementation or integration cost, and is accessible from everywhere. This can, however, easily lead to violations of security policies and to storing data in places outside the control of the company owning the data. During this project you will investigate what sort of data is stored outside of the bank (in, for example, Prezi, Dropbox, Pastebin, etc.) using Google hacking, proxy logs, etc. Additionally, we intend to devise a strategy for detecting and minimizing information leakage through these channels in a practical way. (This is harder than it sounds.)
Steven Raspe <steven.raspe=>nl.abnamro.com>




81
N

IP hijacking.

To derive consistently functional and correct IP routing tables from a fluxing menagerie of BGP advertisements is not a matter of mere collection. Autonomous Systems employ filtering strategies to select the best available route to a given destination. Because the Internet is dynamic in its interconnectedness, routing changes are commonplace, and route filtering can only aspire to produce an ideal routing table, never with absolute certainty. This uncertainty opens a window to malicious route advertisements, in which a claim is made that a given IP subnet (victim subnet) is reachable via an AS with no legitimate claim to that subnet (malicious AS). If such malicious data is accepted into a routing table of an AS (victim AS) then a successful event of 'IP address hijacking' has occurred. At Greenhost, a hosting provider in Amsterdam, we have observed such an attack in the wild.
  • How can we analyze aggregated BGP data from around the world to identify subnets that are potential victims of IP hijacking?
  • How can we subsequently probe these at-risk subnets to gain additional positive or negative evidence of hijacking?
Greenhost is exploring possible answers to these questions through the development of analytical programs and distributed network probing agents.
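One natural first pass over aggregated BGP data is flagging Multiple-Origin-AS (MOAS) conflicts: prefixes announced by more than one origin AS. A MOAS conflict is not proof of hijacking (anycast and multi-homing produce legitimate ones), only a candidate that the probing stage would then investigate. A minimal sketch:

```python
from collections import defaultdict

# Group announcements by prefix and report every prefix that is
# announced by more than one origin AS (a MOAS conflict).
def moas_conflicts(announcements):
    """announcements: iterable of (prefix, origin_asn) pairs."""
    origins = defaultdict(set)
    for prefix, asn in announcements:
        origins[prefix].add(asn)
    return {p: sorted(a) for p, a in origins.items() if len(a) > 1}

conflicts = moas_conflicts([
    ("192.0.2.0/24", 64500),
    ("192.0.2.0/24", 64511),   # second origin: potential hijack candidate
    ("198.51.100.0/24", 64501),
])
```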
Anatole Shaw <ash=>greenhost.nl> Douwe Schmidt <douwe=>greenhost.nl> Sacha van Geffen <sacha=>greenhost.nl>

82
N

DDoS attacks.

A distributed denial-of-service (DDoS) attack is an attempt to make a machine or network resource unavailable to its intended users. DDoS attacks are on the rise: recently, many Dutch websites and services (banking, commercial, governmental) were unreachable because of DDoS attacks. Popular DDoS attacks generate abundant network traffic and thereby flood the network pipe of a machine or network node. Other attacks exhaust the processing power of the internet service.

Research questions:
  • How easy is it to DDoS an internet service?
  • Which (internet) resources are available to start a DDoS?
  • What is needed (tools, infrastructure, design) in order to mitigate DDoS attacks?
  • Is there any correlation between the DDoS packets in an attack?
During this research project, SURFnet will offer a special lab environment that can be used to test the effectiveness of real internet DDoS attacks. SURFnet also offers mitigation services whose effectiveness can be tested. SURFnet and HoneyNED, the Dutch Honeynet chapter, will supervise this research task.
Rogier Spoor <Rogier.Spoor=>SURFnet.nl>



84
F

NTFS index records timestamp manipulation.

The NTFS filesystem has numerous artifacts tracking temporal information. Those artifacts can become key in an investigation, forming the bedrock of a timeline. For some of these artifacts it is known and demonstrated that modification is possible outside the regular update events, introducing problems in the analysis phase and forcing investigators to always consider manipulation.

Index records ($I30) track the contents of directories (and serve as an index for filtering and sorting functions). This NTFS structure also records timestamps for the files inside the directory. Would it be possible to manipulate these values in such a way that a seasoned investigator will be fooled? This assignment includes illustrating the possibility of manipulation using the schematics of NTFS, explaining possible telltale signs for detecting manipulation, and demonstrating the technique using a program that allows for modification.
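As a starting point for working with these structures, here is a hedged sketch of reading the four timestamps from the $FILE_NAME copy inside one $I30 index entry. The offsets assume the standard NTFS layout (16-byte index entry header, then the $FILE_NAME attribute: an 8-byte parent reference followed by created / modified / MFT-modified / accessed FILETIMEs) and should be verified against the target volume before being relied upon:

```python
import struct
from datetime import datetime, timedelta

EPOCH_1601 = datetime(1601, 1, 1)

def filetime_to_datetime(value):
    """Convert a 64-bit FILETIME (100 ns ticks since 1601-01-01) to datetime."""
    return EPOCH_1601 + timedelta(microseconds=value // 10)

def index_entry_times(entry):
    """Read the four $FILE_NAME timestamps from one raw $I30 index entry.

    Assumed layout: 16-byte entry header, 8-byte parent directory
    reference, then four little-endian FILETIME values."""
    times = struct.unpack_from("<4Q", entry, 16 + 8)
    keys = ("created", "modified", "mft_modified", "accessed")
    return dict(zip(keys, map(filetime_to_datetime, times)))
```

Manipulation detection would then compare these values against the $STANDARD_INFORMATION timestamps in the file's own MFT record, which an attacker may have overlooked.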
Kevin Jonkers <jonkers=>fox-it.com>



85
F

Access rights and access control lists in mailboxes within Exchange 2007 and 2010.

Mailboxes and mailbox folders in Exchange offer the possibility to grant rights or access to individual folders or containers at a granular level.
From a forensic point of view there is no easy way for an administrator to determine these individual rights. These access rights should be somewhere in the EDB database file of the Exchange server.

How can we determine the exact access rights to each item or folder within a mailbox? And can we determine changes in these access rights over time, i.e. can we see differences in ACLs throughout different backups of the EDBs?
Deliverables:
-    Do research on the exact location of the ACLs in the databases.
-    Create a POC tool to extract these ACLs for a given mailbox, or all mailboxes within an EDB.
Kevin Jonkers <jonkers=>fox-it.com>



86
F

Visualization of user activity on a computer.

Since datasets are getting larger during investigations, the need to go through this big data in an alternative way is growing. One method to learn more about your suspect is visualizing the suspect's activity on a computer. In order to find the (user) anomalies, you have to distinguish system activities from user activities. The challenge is to visualize only the user activity by zooming in on LNK files, prefetch files, specific event log entries and, for example, the internet history. Especially the last one is more complex than it seems, since you have to find out which websites were visited directly by the user and which were, for example, auto-refreshed. Rob Lee has already done a great job by developing a framework (Super Timeline*) which is capable of aggregating the most important events from a variety of sources, like the ones mentioned above. This tool can possibly be used as a basis for further research.

* http://computer-forensics.sans.org/blog/2011/12/07/digital-forensic-sifting-super-timeline-analysis-and-creation
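A straightforward first step toward such a visualization is filtering a merged timeline down to user-attributable artifact types and bucketing the events for plotting. A minimal sketch (the source labels and event tuples are hypothetical placeholders for rows from a Super Timeline export, not its actual schema):

```python
from collections import Counter
from datetime import datetime

# Artifact types treated as user-attributable; a real study would
# refine this list (and split direct visits from auto-refreshes).
USER_SOURCES = {"lnk", "prefetch", "webhist", "evtx:logon"}

def user_activity_histogram(events):
    """Bucket user-attributable events per hour of day.

    `events` is an iterable of (timestamp, source) pairs, standing in
    for rows from an aggregated timeline.
    """
    hist = Counter()
    for ts, source in events:
        if source in USER_SOURCES:
            hist[ts.hour] += 1
    return hist

timeline = [
    (datetime(2013, 1, 7, 9, 12), "lnk"),
    (datetime(2013, 1, 7, 9, 40), "webhist"),
    (datetime(2013, 1, 7, 3, 0), "system:update"),  # background noise
]
print(user_activity_histogram(timeline))  # Counter({9: 2})
```

The resulting histogram (here per hour; per day or per session is equally possible) is the kind of reduced view that lends itself to plotting and to spotting anomalies such as activity at unusual hours.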
Kevin Jonkers <jonkers=>fox-it.com>



87
F

MySQL record carving.

Carving for (parts of) deleted files is a very common procedure in forensic investigations on computers. Carving retrieves the content of previously deleted but not yet overwritten files from a data carrier. The same procedure can be applied within database files to recover deleted or old versions of records and/or tables. Due to the structured nature of data storage in database files, carving for record structures has been shown to be feasible by Pooters et al. in 2011 (http://sandbox.dfrws.org/2011/fox-it/DFRWS2011_results/Report/Sqlite_carving_extractAndroidData.pdf).

The objective of this assignment is to develop a carving methodology for recovery of database records that works for at least one storage engine used in MySQL. The following are the deliverables of this project:

-    A short literature study into data carving and MySQL storage format(s)
-    A description of the proposed carving method, supported data types, storage engine(s) and limitations of the method
-    A proof of concept implementation of the proposed method
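One plausible first step for such a method, sketched here as an assumption rather than a prescribed approach: locate candidate database pages on the raw image by signature before attempting record-level parsing. For InnoDB (one of MySQL's storage engines), index pages carry the literal "infimum"/"supremum" system-record strings, which can serve as a coarse carving signature:

```python
PAGE_SIZE = 16 * 1024  # InnoDB's default page size

def find_candidate_pages(image: bytes, marker: bytes = b"infimum"):
    """Yield offsets of page-aligned regions containing the marker.

    A coarse first-pass signature scan; actual record carving would
    then parse the page header and walk the record chain, which is
    where the proposed method's real work lies.
    """
    for offset in range(0, len(image) - PAGE_SIZE + 1, PAGE_SIZE):
        page = image[offset:offset + PAGE_SIZE]
        if marker in page:
            yield offset

# Synthetic image: three blank pages, marker planted in the second.
image = bytearray(3 * PAGE_SIZE)
image[PAGE_SIZE + 99:PAGE_SIZE + 106] = b"infimum"
print(list(find_candidate_pages(bytes(image))))  # [16384]
```

Scanning page-aligned and unaligned offsets, handling non-default page sizes, and covering MyISAM's quite different on-disk format are among the limitations the second deliverable would have to spell out.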
Kevin Jonkers <jonkers=>fox-it.com>



93
F

Securing the last-mile of DNS.

The Domain Name System (DNS) is slowly being secured using DNSSEC. This technology allows a resolver to verify the authenticity of DNS answers from authoritative nameservers. However, DNSSEC does not provide end-to-end security: the resolver on the end-user's machine still has to trust the resolver in the network (or verify signatures itself).

The second problem is that the DNS does not provide any form of confidentiality: queries and the data therein are transmitted in the clear. Several techniques exist to encrypt and authenticate DNS data between hosts, such as TSIG and SIG(0). The most promising technology for providing confidentiality of DNS data between the end-user and the
resolver is DNSCrypt from OpenDNS. This project uses DNSCurve to secure the connection between the client and the resolver. It supplies software for end-users that ships with the certificate of OpenDNS to verify the answers coming from the OpenDNS resolvers.

The goal of this research project is to define, and perhaps implement, a mechanism that allows the end-user (stub) resolver to securely retrieve information on its configured resolver and verify its identity, so that the client knows it is talking to the correct resolver and the data sent to and from the resolver is protected from eavesdroppers.
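TSIG, one of the techniques mentioned above, authenticates DNS messages between two hosts with an HMAC over the wire-format message using a pre-shared key. A minimal sketch of that core idea using Python's standard library (the key and message bytes are placeholders; real TSIG per RFC 2845 also covers the key name, timestamp and fudge fields, and originally used HMAC-MD5):

```python
import hashlib
import hmac

def sign(message: bytes, key: bytes) -> bytes:
    """Compute a TSIG-style MAC over a DNS message (simplified)."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, key: bytes, mac: bytes) -> bool:
    """Constant-time check that the MAC matches the shared key."""
    return hmac.compare_digest(sign(message, key), mac)

shared_key = b"example-pre-shared-key"   # placeholder pre-shared key
query = b"\x12\x34 placeholder wire data"  # placeholder DNS message

mac = sign(query, shared_key)
print(verify(query, shared_key, mac))   # True
print(verify(query, b"wrong-key", mac))  # False
```

The pre-shared key is exactly what makes TSIG impractical for the last mile at scale, which is why the project looks at mechanisms (DNSCurve/DNSCrypt-style public keys, or secure retrieval of the resolver's identity) that avoid out-of-band key distribution.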



