
SNE Master Research Projects 2014 - 2015


Contact

Cees de Laat
tel: +31205257590
room: C.3.152
Course Codes:


Research Project 1 MSNRP1-6 53841REP6Y
Networking Research Project 2 MSN2NRP6 53842NRP6Y
Security Research Project 2 MSN2FRP6 53842SRP6Y

TimeLine

RP1 (January):
  • Wednesday Sep X 2014, XXh00: Introduction to the Research Projects.
  • Nov 26, 2014, 10h30-12h30: Detailed discussion on chosen subjects for RP1.
  • Monday Jan 5th - Friday Jan 30th 2015: Research Project 1.
  • Friday Jan 9th: (updated) research plan due.
  • Monday Jan 19, 16h00: possibility for students to discuss problems/progress in OS3 Lab.
  • Tuesday afternoon Feb 3rd 2015, 13h00-17h00: Presentations RP1 in B1.23 (OS3 lab) at Science Park.
  • Wednesday Feb 4th 2015, 10h00 - 17h00: Presentations RP1 in B1.23 (OS3 lab) at Science Park.
  • Sunday Feb 8th 24h00: RP1 - reports due
RP2 (June):
  • Wednesday April 22, 2015, 14h00-17h00, B1.23 Detailed discussion on chosen subjects for RP2.
  • Monday Jun 1st - Jun 26th 2015: Research Project 2.
  • Friday Jun 5th: (updated) research plan due.
  • Monday Jun 15th, 16h00: possibility for students to discuss problems/progress in OS3 Lab.
  • Wednesday Jul 1st 2015, 9h50-16h10: presentations in H0.08 @ SP904.
  • Thursday Jul 2nd 2015, 9h30-16h10: presentations in C0.05 @ SP904.
  • July 6th 09h00 2015: RP2 - reports due (preferably not much later as holidays interfere).

Projects

Here is a list of student projects. The left-over projects for this year can be found here: LeftOvers.

In a futile attempt to prevent spam "@" is replaced by "=>" in the table. Color of cell background:
  • Currently chosen project.
  • Blocked, not available.
  • Project plan received.
  • Confidentiality was requested.
  • Presentation received.
  • Report but no presentation.
  • Report received.
  • Outside normal rp timeframe.
  • Completed project.


# | title / summary | supervisor contact | students | RP 1/2

#7

Automated configuration of BGP on Edge Routers.

The configuration and management of the routing control plane is lacking support from versatile and easy-to-use open source toolsets. Currently, there are commercial tools available that do not fit well with operational workflows or do not integrate with public routing information resources like Internet Routing Registry (IRR) databases.

The IRR is a federated collection of routing policy databases, containing objects that describe routing policies which are used by various larger and smaller networks in the configuration of their routers. Broadly, the design objective for the new IRR toolset is a modular toolset: lightweight for simple tasks, more elaborate for complex tasks, and designed for extensibility.

Project Result:
The outcome of this research project is a brief study of the related technologies and Internet standards that can be used for creating the new toolset. In addition, the students need to create a small working prototype which automates the BGP configuration based on their findings and conclusions. A small demonstration of the capabilities of this prototype would be much appreciated. As supervisors you can assign me and Benno. It is a challenging project, as we are at the starting point of our work and possibly cannot answer all the questions, so the students should have a good level of independence.

Source code can be found here.
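
As a purely illustrative sketch of the kind of building block such a toolset could contain, the snippet below queries an IRR mirror (whois.radb.net is assumed here) for the route objects of an origin AS and prints them as a prefix list; the AS number and the IOS-style output format are arbitrary examples, not part of the project definition.

import socket

IRR_HOST = "whois.radb.net"   # public IRR mirror (assumption)
IRR_PORT = 43

def irr_routes(origin_as):
    """Return the IPv4 prefixes of all route objects with the given origin AS."""
    with socket.create_connection((IRR_HOST, IRR_PORT), timeout=10) as s:
        s.sendall(f"-i origin {origin_as}\r\n".encode())
        data = b""
        while chunk := s.recv(4096):
            data += chunk
    prefixes = []
    for line in data.decode(errors="replace").splitlines():
        if line.lower().startswith("route:"):
            prefixes.append(line.split(":", 1)[1].strip())
    return prefixes

if __name__ == "__main__":
    asn = "AS1104"  # example origin AS, purely illustrative
    for seq, prefix in enumerate(irr_routes(asn), start=1):
        # emit IOS-style prefix-list lines as one possible output format
        print(f"ip prefix-list {asn}-IN seq {seq * 5} permit {prefix}")
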
Stavros Konstantaras <stavros=>nlnetlabs.nl>
Benno Overeinder <benno=>nlnetlabs.nl>

Stella Vouteva <stella.vouteva=>os3.nl>
Tarcan Turgut <tturgut=>os3.nl>

#9 (RP2)

Security Intelligence Data Mining.

In modern Security Operations it is becoming crucial to be able to mine information published on public sites, such as social networks and "pastebins", in order to generate security intelligence. However, the lack of natural language capabilities in most of the existing parsing engines creates difficulties in processing the mined information, as capturing sentiment is difficult to implement. Moreover, most of the existing technologies only support occidental languages, while other interesting alphabets (Cyrillic, Chinese, etc.) are not frequently supported.
Your research will provide a practical architecture for mining security-related information from public websites, criteria for the data mining, and the ability to perform false-positive reduction based on language and context interpretation, with the ability to further scale and support non-occidental alphabets. Since this setup will provide value when successfully implemented, a technical feasibility study as well as a well-founded financial case with the detailed CAPEX and OPEX expected for the architecture is a desired outcome. As in any other data-mining effort, the ability to deal with unstructured as well as structured data, data normalization and categorization, and storage requirements are expected outcomes of your research.
Henri Hambartsumyan <HHambartsumyan=>deloitte.nl>
Nikolaos Triantafyllidis <Nikolaos.Triantafyllidis=>os3.nl>
Diana Rusu <drusu=>os3.nl>

#11 (RP1)

Discovery method for a DNSSEC validating stub resolver.

With current developments in DNSSEC and DANE, validation at the end-point (host) becomes an important feature for end-to-end authentication and encryption (DANE TLSA). A DNSSEC-enabled stub validator at the end-host has to figure out in which way the local resolver answers DNS queries and which DNSSEC related data is forwarded to the end-host.

The discovery method for the DNSSEC validating stub resolver needs to probe the (local) resolver in a specific way to determine which information is provided, and possibly decide to query the authoritative name server to enable DANE TLSA. The goal of the project is to develop a discovery method for a stub resolver to decide in which way it deals with a local resolver, or whether it will switch to full recursion to obtain the necessary DNSSEC data.
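
A minimal sketch (assuming dnspython and some DNSSEC-signed test zone) of the kind of probing such a discovery method could start with: send a query with the DO bit set and check whether the local resolver returns RRSIG records and/or sets the AD flag. The resolver address and zone name below are placeholders; a real method would probe many more cases.

import dns.flags
import dns.message
import dns.query
import dns.rdatatype

RESOLVER = "192.0.2.53"    # placeholder: address of the local resolver under test
TEST_ZONE = "internet.nl"  # assumption: some DNSSEC-signed zone to probe with

def probe(resolver_ip, qname):
    query = dns.message.make_query(qname, dns.rdatatype.A, want_dnssec=True)
    reply = dns.query.udp(query, resolver_ip, timeout=3)
    has_rrsig = any(rrset.rdtype == dns.rdatatype.RRSIG for rrset in reply.answer)
    sets_ad = bool(reply.flags & dns.flags.AD)
    return has_rrsig, sets_ad

if __name__ == "__main__":
    rrsig, ad = probe(RESOLVER, TEST_ZONE)
    if rrsig:
        print("resolver forwards DNSSEC records: the stub can validate locally")
    elif ad:
        print("resolver validates but strips RRSIGs: trust it, or switch to full recursion")
    else:
        print("no DNSSEC data at all: the stub should switch to full recursion")
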
Willem Toorop <willem=>nlnetlabs.nl>

Xavier Torrent Gorjón <xavier.torrentgorjon=>os3.nl>

#12 (RP2)

Analysis of DNS Resolver Performance Measurements.

Currently, there are two main open source DNS recursive resolvers, namely BIND and Unbound. For various reasons, users, vendors, and (large) ISPs are interested in the (relative) performance of DNS recursive resolvers.

In this project, the goal is to design a generic and extensible framework for performance evaluation of DNS recursive resolvers. Besides the design, the implementation and evaluation are also part of the project.
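
A hedged sketch of the measurement core that such a framework might wrap (dnspython assumed; the resolver address and query list are placeholders, and a real framework would also control cache state, concurrency, query mixes and warm-up):

import statistics
import time

import dns.message
import dns.query
import dns.rdatatype

RESOLVER = "192.0.2.53"                                 # resolver under test (placeholder)
NAMES = ["example.com", "example.net", "example.org"]   # toy query list

def measure(names, resolver_ip):
    latencies = []
    for name in names:
        query = dns.message.make_query(name, dns.rdatatype.A)
        start = time.perf_counter()
        dns.query.udp(query, resolver_ip, timeout=5)
        latencies.append((time.perf_counter() - start) * 1000.0)  # milliseconds
    return latencies

if __name__ == "__main__":
    lat = measure(NAMES, RESOLVER)
    print(f"{len(lat)} queries, median {statistics.median(lat):.2f} ms, max {max(lat):.2f} ms")
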
Willem Toorop <willem=>nlnetlabs.nl>
Yuri Schaeffer <yuri=>nlnetlabs.nl>

Hamza Boulakhrif <hboulakhrif=>os3.nl>

#14 (RP2)

Teleporting virtual machines.

Virtual machines (VM) are a key unit of cloud infrastructures. Often, VMs need to be moved or replicated between hosts. As hosts can be located anywhere in the world, the transport of a whole virtual machine can take significant amounts of time and data-transport capacity. A hypothesis is that time and transport efficiency could be gained by transporting only the file system and operating-system (OS) configuration parameters, and rebuilding the VM from locally available copies of the OS and of the installed applications.

In this project, you will investigate the hypothesis by implementing a VM teleporter as described above, as well as a baseline system that transports a VM in bulk. Your implementation should answer the following questions for at least the implemented configuration.
  • Is the required data transport-capacity indeed less for a teleported VM than for the baseline system?
  • Is a teleported VM indeed up-and-running more quickly than with the baseline system?
"Deventer, M.O. (Oskar) van" <oskar.vandeventer=>tno.nl>

"Rengo, Carlo" <carlo.rengo=>os3.nl>
Harm Dermois <harm.dermois=>os3.nl>

#16 (RP1)

Functional breakdown of decentralized social networks.

Microblogging is a type of service where short public text messages are published on the internet, and semi-private messages can be exchanged between internet users through an intermediary. Popular services include market leaders Weibo, Twitter.com as well as corporate solutions like Yammer. Many of these are centralised commercial services very limited in scope by their business model. The services are increasingly controversial because of the closed nature of the services, their volatility in API's for developers (not based on published standards, sometimes crossing the line of platform provider and competing directly with external developers), the lack of openness for external developers and the fact that in many cases privacy-infringing user data from both users and their followers is being sold and/or exploited in the background. Typically it is not possible to communicate outside of the closed network.

Decentralised federated microblogging solutions like Pump.io, Status.net, Buddycloud and Friendica hold significant promise to improve the situation, especially when these free and libre solutions become part of the standard hosting/access package of internet users. If we can make the technology automatically available to every internet user through their ISP's and/or hosting providers, adopting the same user credentials they use with email, it would allow for automatic discovery across the internet and zero configuration integration with existing tools (e.g. mail clients, instant messaging software) as well as 'identity ownership' for the end user. This opens the possibility of being able to automatically 'follow' users of any email address (provided they belong to the 23% of users that want this), allow closed user groups, hidden service discovery and serious user-determined cryptography.

The research project looks into the various (open source) technologies which are available, and makes recommendations for inclusion into the project. What are the most mature solutions, in features and in implementation quality? To what extent are upcoming standards such as OStatus (sufficiently) supported? What important features are not yet standardised? What are the performance characteristics of the current federated microblogging solutions? What could be good, horizontally scalable deployment strategies?
Michiel Leenaars <michiel=>nlnet.nl>

Wouter Miltenburg <wouter.miltenburg=>os3.nl>

#17 (RP2)

Migration models for hosting companies.

In this project you will look at the typical setup of different classes of hosting companies.
  • What is their technical architecture?
  • How is responsibility for maintenance delegated, and what are the biggest maintenance costs?
  • What are their business requirements for an upgrade of the software part of their technical infrastructure?
Given that their server racks will be underprovisioned and oversubscribed, can we devise any models to migrate such a business with minimal extra dependencies? For instance, a cloud-supported migration model where some or all services are temporarily moved to PaaS providers. How would such a model look, and how can we successfully demonstrate that such an approach is feasible?
Michiel Leenaars <michiel=>nlnet.nl>

Xander Lammertink <xander.lammertink=>os3.nl>

#20 (RP1)

Network utilization with SDN in on-demand application-specific networks.

One of the biggest problems in computer networks is the lack of flexibility to support innovation. It is widely accepted that new network architectures are required. Given the success of cloud computing in the IT industry, the network industry is now driving the development of software-based networks. Software-based networks allow deployment and management of network architectures and services through resource virtualization. Ultimately, a program can describe the blueprint of the software-based network, i.e. its deployment, configuration, and behavioral aspects.

At TNO/UvA, we created a prototype of an Internet factory, which enables us to produce networks on-demand at many locations around the globe. In this work, we will develop a program using our prototype that produces OpenFlow networks (using Open vSwitch, OpenDaylight, and Floodlight). We will produce a number of interesting networks, e.g. one that finds better paths than Internet routing, one that is robust against failures of network elements, and one that offers larger capacity by combining multiple paths. Is it possible to capture years of experience and best practices in network design, deployment, and operations into a compiler?

http://youtube.com/user/ciosresearch
Marc Makkes <M.X.Makkes=>uva.nl>

Ioannis Grafis <Ioannis.Grafis=>os3.nl>

#24 (RP2)

DANE verification test suite.

Most encrypted forms of communication on the Internet nowadays use Transport Layer Security (TLS). TLS is used as a means to validate the certificates of the servers to which clients connect. In order to validate these certificates, the Internet relies on trusted third parties called Certificate Authorities (CAs) that sign these certificates. The client then checks whether the certificate presented by the server is indeed valid, i.e. signed by a trusted CA.
DNS-Based Authentication of Named Entities (DANE) is a new standard by the Internet Engineering Task Force (IETF). It is used as an improvement to validate the aforementioned secure servers by validating a server’s certificate, which is now stored as a TLSA resource record (TLSA RR) in the Domain Name System (DNS) zone of that domain. If these zones are part of the DNS Security Extensions (DNSSEC) chain of trust, the validity of these TLSA RRs can be verified as the records are signed by trusted keys within the DNSSEC chain of trust.

This project aims to provide a test suite to test the current DANE tools against the DANE specification as per Request for Comments (RFC) 6698 [1].

[1] http://tools.ietf.org/html/rfc6698
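
For illustration only, one of the many checks such a test suite would contain could look like the sketch below (dnspython assumed): it compares the SHA-256 hash of the server certificate against published TLSA records, covering only selector 0 with matching type 1, and ignoring the certificate usage field and DNSSEC validation of the lookup, which a real suite must of course handle.

import hashlib
import ssl

import dns.resolver

def tlsa_matches(host, port=443):
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    cert_sha256 = hashlib.sha256(der).digest()

    answers = dns.resolver.resolve(f"_{port}._tcp.{host}", "TLSA")
    for rr in answers:
        # rr.usage / rr.selector / rr.mtype / rr.cert, per dnspython's TLSA rdata type
        if rr.selector == 0 and rr.mtype == 1 and rr.cert == cert_sha256:
            return True
    return False

if __name__ == "__main__":
    host = "example.nl"   # placeholder: a host that actually publishes TLSA records
    print("matching TLSA record found" if tlsa_matches(host) else "no matching TLSA record")
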
Michiel Leenaars <michiel=>nlnet.nl>

Guido Kroon <guido.kroon=>os3.nl>
Hamza Boulakhrif <hamza.boulakhrif=>os3.nl>

#33 (RP1)

Zero-effort Monitoring Support.

Monitoring of systems is vital to pro-active troubleshooting - that is, to fix a problem before a customer detects it. Setting up monitoring is usually manual work, but with a bit of infrastructure, this can be automatically done when packages are installed!

Gift-Wrapped Opportunities

Modern Linux systems all use packages of some sort. Common server packages include scripts with a standardised start/stop interface, a method to indicate whether these scripts should be started at system boot, and PID files to check where a daemon should be running. Linux, in turn, provides us with an incredible amount of information based on PIDs, including whether a process is running, its %cpu and network behaviour. Wow, lots of information to monitor.

The one link that is missing is to relay this information over monitoring protocols. There are many of those, but only one is a true Internet Standard, namely SNMP: a protocol that identifies variables in a strictly defined framework where everything has a unique ((sub-)sub-)number, through which it can be queried. Numbers exist that lend themselves to the purpose of this exercise.

Assignment

In this assignment, we are asking you to support two systems to prove the ideas, focussing on Debian and RedHat for now. You will therefore do some programming. We ask you to intervene in system control in a generic manner. We have selected the upcoming systemd service as our focus. This is a modern replacement of the init daemon in process #1, which has an overview of the services that should be running on a system, and whether they actually are running. Find a way to integrate monitoring into this system; ideally this would be part of the systemd daemon, but to demonstrate the power of this approach the scope of this exercise is limited to polling and querying the daemon. As part of the assignment, you will explain how your approach can be packaged as an add-on or drop-in replacement so that a distribution could benefit from your work. You will also keep a keen eye on any constraints on software that is being controlled by systemd; if any assumptions or requirements crop up, you will document those as part of your packaging description.

From systemd, you should be able to retrieve an up-to-date list containing daemons that ought to run. The list also mentions for each daemon whether it should run at system boot, and this information is kept up-to-date. Finally, the daemon will tell you whether a daemon is running at that time, so you have all the information to know whether your system is running well -- that is, if everything that ought to run is indeed running, and whether this will also be true after the next reboot.

Now, make the information available from systemd through an AgentX sub-agent (RFC 2741). This evades most of the SNMP protocol details, which can be handled in a "master agent" such as snmpd from the Net-SNMP package. Your sub-agent implements network services monitoring (RFC 2788) and will respond to inquiries to look up the current status information of a daemon. In addition, it will detect changes in service status, and report these pro-actively through a so-called "SNMP trap".

The rest is easy. Any monitoring application (we might use Zabbix or Nagios in this assignment) can monitor SNMP. Moreover, it can be set up with templates that iterate over your tables. This yields the system's own idea of what should run, and whether it is indeed running. The exceptions (stopped even though it should boot automatically, and started while it is not in the autoboot sequence) can be put to good use to display monitoring information. Finally, traffic dumps can be exploited for drawing diagrams per service.
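
A minimal sketch of where the required information can come from is shown below; it simply shells out to systemctl to collect the "should it run at boot" and "is it running now" state per service. A real solution would use the systemd D-Bus API and expose the result through an AgentX sub-agent (for example via a Python AgentX library) rather than printing it.

import subprocess

def service_units():
    """Yield the names of all service units systemd knows about."""
    out = subprocess.run(
        ["systemctl", "list-units", "--type=service", "--all", "--no-legend", "--plain"],
        capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        fields = line.split()
        if fields:
            yield fields[0]          # e.g. "ssh.service"

def state(unit, verb):
    """verb is 'is-active' or 'is-enabled'; a non-zero exit simply means 'no'."""
    res = subprocess.run(["systemctl", verb, unit], capture_output=True, text=True)
    return res.stdout.strip() or "unknown"

if __name__ == "__main__":
    for unit in service_units():
        enabled = state(unit, "is-enabled")   # should it start at boot?
        active = state(unit, "is-active")     # is it running right now?
        problem = enabled == "enabled" and active != "active"
        print(f"{'!!' if problem else 'ok'} {unit:<40} enabled={enabled:<10} active={active}")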

Research Questions

  • We are curious whether the approach we envision is practical. We target both Debian and RedHat because they are sufficiently different to make your answer somewhat broadly applicable. We believe that standardisation and automation of monitoring can greatly aid its usefulness.
  • Please investigate experiences and reports of administrators, and evaluate the new system based on that. Do not forget to keep your own eyes open either -- what problems are you encountering, can they be remedied or are they show-stoppers? And, is there a need (and a possibility) to monitor more aspects without getting into application-specifics?
See also: http://research.arpa2.org/

The ARPA2 project is the development branch of InternetWide but it also engages in software and protocol research. This is a logical consequence of our quest for modern infrastructure for the Internet.

Code produced: https://github.com/arpa2/sytemd-snmp-zeroconf
and: zip-file
Rick van Rein <rick=>openfortress.nl>

Julien Nyczak <jnyczak=>os3.nl>

#38 (RP2)

Monitoring DNSSEC.

DNSSEC is quickly becoming a vital cornerstone for internet security. Its proper functioning involves regular zone re-signing, to avoid expiration of signatures. This is a very useful thing to monitor. We ask you to develop a monitoring system for DNSSEC.

What DNSSEC does

The use of DNSSEC is that it signs DNS information. A chain of signatures reaches all the way down from the root nameservers, whose public key is "widely distributed and well-known". To take part in this scheme, zones that roll out DNSSEC sign up with their parent zone for a "secure delegation". When delegation is secure, the zone will go offline if anything about its DNSSEC is behaving badly. For example, when the short-lived signatures on DNS records expire without being refreshed in time.

This clearly calls for monitoring. A solution like OpenDNSSEC will make sure to re-sign zone data some time before a signature expires; if this is getting late, an alert from a monitoring system can help to take measures in time.

Construct a MIB for SNMP

There are many monitoring systems, and they all do pretty much the same; the only official standard is SNMP, which is indeed compatible with all those other systems. SNMP has a ((…sub-)sub-)numbering scheme for all sorts of "objects" that can be monitored, and assigns meanings to those numbers through a so-called Management Information Base schema. It is common practice to define tables indexed by certain data, and to retrieve entire tables or indexed rows.

We are asking you to construct a MIB that delivers all zones that are being signed, plus vital information about their well-being. Candidates of this vital information would be the oldest signature in a zone, the signature on the SOA record, and the SOA record itself. Please consider these as well as other warning signs that may be valuable to the proper functioning of DNSSEC. Many tools have been constructed, but what we are looking for is an integrated SNMP-compliant specification for the well-being of a zone’s DNSSEC. For inspiration, you can read https://wiki.opendnssec.org/display/DOCS/kasp.xml

We suggest that you list a table, indexed by zone names, and presenting what you derived as being their relevant state. The MIB should in fact be general enough to list unsigned zones, whose SOA may be interesting but whose DNSSEC status would of course be recognisably different from properly signed zones.

Build and Demonstrate a Proof of Concept

We ask you to implement the MIB in a proof-of-concept implementation, perhaps in Python. The implementation can be run as an AgentX sub-agent (RFC 2741), which takes care of most of SNMP’s protocol frivolities. You would simply respond to inquiries, including searches for zones and table entries. In addition, alert states could be detected through regular local polling, and cause an "SNMP Trap" to be sent to a monitoring station.

There are various architectural approaches to monitoring DNSSEC. Investigate options; to name two, you could inspect configuration files of a tool like OpenDNSSEC or ZKT or you could extract information from DNS authoritatives. Discuss your preference and selection for implementation.

Install the Zabbix monitoring system, configure it to retrieve the tables over SNMP, and demonstrate that they give early warnings. Define and demonstrate a template that the system can use to automatically discover your tables by searching through the number range of your MIB.

Aim for as little effort as possible for dealing with additional domains, the ideal being full automation. Even if your monitoring system does not support it (yet), cater for a smarter one by supporting automatic discovery based on your MIB.
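
As a small, hedged illustration of one such life sign, the sketch below (dnspython assumed; the resolver address and zone names are placeholders) queries the RRSIG covering a zone's SOA record and reports how long until it expires; an agent would expose this per-zone value as a MIB object and raise a trap when it drops below a threshold.

import time

import dns.message
import dns.query
import dns.rdatatype

RESOLVER = "192.0.2.53"   # placeholder: resolver (or authoritative server) to query

def soa_rrsig_expiry(zone):
    """Seconds until the earliest RRSIG covering the zone's SOA expires."""
    query = dns.message.make_query(zone, dns.rdatatype.SOA, want_dnssec=True)
    reply = dns.query.udp(query, RESOLVER, timeout=5)
    expirations = [rr.expiration                       # seconds since the epoch in dnspython
                   for rrset in reply.answer if rrset.rdtype == dns.rdatatype.RRSIG
                   for rr in rrset
                   if rr.type_covered == dns.rdatatype.SOA]
    if not expirations:
        raise RuntimeError(f"{zone}: no RRSIG over SOA (unsigned zone or signatures stripped)")
    return min(expirations) - time.time()

if __name__ == "__main__":
    for zone in ["nlnetlabs.nl", "os3.nl"]:            # example zones
        try:
            print(f"{zone}: SOA signature expires in {soa_rrsig_expiry(zone) / 86400:.1f} days")
        except Exception as exc:
            print(f"{zone}: {exc}")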

Research Questions:

  • What are vital life signs for monitoring DNSSEC?
  • What are vital life signs for monitoring DNS?
  • How to construct a MIB for DNSSEC?
  • How to conduct monitoring based on such a MIB?
  • How do architectures for monitoring DNSSEC compare?
See also: http://research.arpa2.org/

The ARPA2 project is the development branch of InternetWide but it also engages in software and protocol research. This is a logical consequence of our quest for modern infrastructure for the Internet.
Rick van Rein <rick=>openfortress.nl>

Martin Leucht <martin.leucht=>os3.nl>
Julien Nyczak <julien.nyczak=>os3.nl>

#41 (RP1)

Protecting against relay attacks forging increased distance reports.

As multiple studies have shown (e.g. on contactless payments and on keyless entry for cars (http://s3.eurecom.fr/docs/ndss11_francillon.pdf)), relay attacks can be quite dangerous and hard to fight. Some ideas have been proposed (proximity, timing and spotting mobile signals), but how effective are these? Students can research the possibilities and counter-measures, both preventive and detective.
Martijn Sprengers <sprengers.martijn=>kpmg.nl>
"van Iterson, Paul" <vanIterson.Paul=>kpmg.nl>
"van den Breekel, Jordi" <vandenBreekel.Jordi=>kpmg.nl>

Xavier Torrent Gorjón <Xavier.TorrentGorjon=>os3.nl>

#42 (RP1)

Extremely Secure Communication.

In this research, a scenario to enable remote work teams and individuals to communicate and share data securely will be proposed and developed. Examples where this would be useful, are an oil company finding a new source of oil, or a company that is secretly taking over another company and doesn’t want to influence the market value. In order to tackle the issue of possible backdoors, an open-source, verifiable, operating system will be used and a true random number generator (TRNG)[4], also known as hardware random number generator (HRNG), will be implemented. To reduce the possibility of having a link tapped, where the sensitive data is transmitted, all data will not only be encrypted, but will also travel over a multipoint virtual private network (VPN)[5]. A multipoint VPN will ensure that all traffic is isolated and no single tap point will exist, as individual VPN tunnels will be created between all participants.
  • How can we set up extremely secure communication around the globe for project teams?
  • Take for example large multinationals: how can they work around the globe on an extremely confidential and sensitive topic?
  • Can they trust any type of hardware (laptop, phone, printer), software (OS, phone OS, printer OS) after the Snowden revelations?
  • Is blackphone (https://www.blackphone.ch/) really a secure solution for telephony?
  • How about Tails (https://tails.boum.org/) and initiatives for securely designed hardware components (http://arstechnica.com/security/2013/12/we-cannot-trust-intel-and-vias-chip-based-crypto-freebsd-developers-say/)?
NB: This project can be supported by Merel Koning from the RU Nijmegen, as she is a real fan of this topic.
Martijn Sprengers <sprengers.martijn=>kpmg.nl>
Ruud Verbij <Verbij.Ruud=>kpmg.nl>
Jarno Roos <Roos.Jarno=>kpmg.nl>

Daniel Romão <daniel.romao=>os3.nl>

#45 (RP2)

Container Networking Solutions.

More and more traditional virtualization is being phased out in favour of container solutions (e.g. Docker, rkt). These containers can be interconnected using a network solution, varying from traditional VLANs to overlay networks, and several others in-between. Since this technology is rapidly evolving, there is little to no up-to-date information available on how these perform in different implementations.

In this project we set up a testing environment in which different container setups can be deployed, using various network solutions. The results will be compiled into a reference (open, accessible to anyone) for people looking for a good network solution for their specific deployment.
Paola Grosso <p.grosso=>uva.nl>

Joris Claassen <joris.claassen=>os3.nl>

#48 (RP2)

Proving the wild jungle jump.

The goal of the project is to determine whether a wild jungle jump is a feasible attack vector or not. A wild jungle jump is a situation where the CPU's program counter (PC) is corrupted in such a fashion that it points to an attacker-controlled address. At Riscure we are able to inject faults using clock, power, electromagnetic and optical fault injection. We prefer to use power fault injection in order to keep the complexity at a minimum. All hardware and software to perform power fault injection will be provided by us. You will be responsible for creating a test bench and all the necessary software to determine the likelihood of the wild jungle jump. A virtual machine, including all the necessary tools to build a firmware image, will be provided by us. The target is a Wandboard (http://www.wandboard.org/) which uses a Freescale i.MX6 Solo processor.
Niek Timmers <Timmers=>riscure.com>
James Gratchoff <jgratchoff=>os3.nl>

#49 (RP2)

Feasibility and Deployment of BadUSB.

Many methods have been developed to penetrate a network as a hacker or a pen-tester. Social engineering is one of the many possible courses an attacker could take to get inside a system. The goal is to get access to a computer connected to the target network and use it as an entrance point for the attack. One of the practical strategies to achieve this is to plug a USB stick into an available machine. This can be done by using a BadUSB device that is detected by the victim's computer as a Human Interface Device (instead of a mass storage device, it is recognized as a keyboard) and that runs code without the knowledge or permission of the user, for example while the user is away for lunch.
The purpose of this project is to make a proof of concept for BadUSB with the use of an Arduino as a replacement for a USB stick. The project is set up to test the effectiveness of the following BadUSB features:
  • Running scripts that extract network, anti-virus information, user accounts list, etc.
  • Installation of a root certificate on Windows machines.
  • Installing and running a network scanner, collecting the results, sending them over the Internet and storing them on a remote server.
  • Getting remote access to the machine for later use.
Martijn Sprengers <sprengers.martijn=>kpmg.nl>

Stella Vouteva <svouteva=>os3.nl>

#50 (RP1)

Circumventing Forensic Live-Acquisition Tools on Linux.

This project will encompass the development of a rootkit for Ubuntu that prevents live acquisition of data stored on the hard drive and, by doing so, creates an Anti-Forensic Ubuntu distribution (Afubuntu?). Live acquisitions are still a necessity for systems which utilize full disk encryption, and are performed by initiating a block-by-block copy of the data stored on the internal (in use) hard drive. This rootkit will try to fool such block-copy tools (like dd) by presenting them with random (or maybe even structured) data. Other possibilities could encompass presenting a completely clean system image to the tool, which would decrease the chance of detection. A proof-of-concept will be developed that works against the GNU coreutils dd tool to prove that such a technique is feasible and could be used in real-life situations to hide evidence.
Arno Bakker <Arno.Bakker=>os3.nl>
Jaap van Ginkel <Jaap.vanGinkel=>uva.nl>

Yonne de Bruijn <yonne.debruijn=>os3.nl>

#52 (RP2)

Evaluating the security of the KlikAanKlikUit Internet Control Station 1000.

Home automation products [2] are becoming more popular every day. They are cheap and simple to install, and their usage is very versatile. With the 'Internet of Things' (IoT) era, home automation has made a step upwards. Home devices can be controlled from the Internet, often by using a smart-phone app. The downside of this development is that home appliances become exposed to the Internet and thus, to the whole world. This research focuses on the communication regarding the Internet Control Station 1000 [1] from the KlikAanKlikUit brand. This device acts as a gateway between the home automation appliances and the Internet. This research is done to find security issues within the ICS-1000 device. Although the experiments are conducted on the ICS-1000 device, it may well be possible that some of the results and recommendations can be applied to home automation Internet gateways in general.

[1] Internet Control Station 1000. Retrieved from: http://www.klikaanklikuit.nl/shop/nl/producten-1/zenders/ics-1001/ Accessed: 5 January 2015
[2] Home Automation. Retrieved from: http://en.wikipedia.org/wiki/Home_automation Accessed: 6 January 2015
Martijn Sprengers <sprengers.martijn=>kpmg.nl>

Roland Zegers <rzegers=>os3.nl>

#57 (RP1)

Known plaintext attacks on encrypted ZIP files.

Encrypted ZIP files are often used to obfuscate firmware, thus preventing users from developing their own alternative code for their hardware. For encrypted ZIP files, Biham and Kocher already proved in 1995 that it is possible to do a known-plaintext attack (KPA). However, there is currently no open-source-licensed version of this attack available. Although source code is available for one tool, the lack of open source tools makes it difficult for some to study, share and improve the program.

The research project comprises two parts: the first is to investigate which algorithms are most suited for this attack; the second is to make a tool that reimplements the algorithm, without looking at the source code of other tools, and release it under an open source license.
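
For background, the traditional PKZIP stream cipher ("ZipCrypto") that the Biham-Kocher attack targets maintains three 32-bit keys that are updated per plaintext byte, as documented in PKWARE's APPNOTE. The sketch below shows that key schedule and decryption loop only; it is not an implementation of the attack itself.

MASK = 0xFFFFFFFF

# standard reflected CRC-32 table (polynomial 0xEDB88320), as used by ZIP
CRC_TABLE = []
for i in range(256):
    c = i
    for _ in range(8):
        c = (c >> 1) ^ 0xEDB88320 if c & 1 else c >> 1
    CRC_TABLE.append(c)

def crc32_byte(crc, b):
    return ((crc >> 8) ^ CRC_TABLE[(crc ^ b) & 0xFF]) & MASK

class ZipCryptoKeys:
    """The three-key internal state of traditional PKZIP encryption."""
    def __init__(self, password):
        self.k0, self.k1, self.k2 = 0x12345678, 0x23456789, 0x34567890
        for b in password:
            self.update(b)

    def update(self, b):
        self.k0 = crc32_byte(self.k0, b)
        self.k1 = (self.k1 + (self.k0 & 0xFF)) & MASK
        self.k1 = (self.k1 * 134775813 + 1) & MASK
        self.k2 = crc32_byte(self.k2, self.k1 >> 24)

    def stream_byte(self):
        t = (self.k2 | 3) & 0xFFFF
        return ((t * (t ^ 1)) >> 8) & 0xFF

def decrypt(ciphertext, password):
    """Decrypt ZipCrypto data; note that real ZIP entries are preceded by a
    12-byte random encryption header that is decrypted (and discarded) first."""
    keys = ZipCryptoKeys(password)
    plain = bytearray()
    for c in ciphertext:
        p = c ^ keys.stream_byte()
        keys.update(p)
        plain.append(p)
    return bytes(plain)
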

Armijn Hemel - Tjaldur Software Governance Solutions
Armijn Hemel <armijn=>tjaldur.nl>

Dragos Barosan <dbarosan=>os3.nl>

#60 (RP1)

Pre-boot RAM acquisition and compression.

During forensic investigations it is common practice to retrieve the content of the memory of a running system. This is the place where advanced malware hides, where suspects leave digital traces regarding their recent computer usage, and which contains the crypto keys for the increasingly common full disk encryption. While several techniques (like DMA and (cold) boot attacks) are available for this, none of them work in all possible scenarios and in most cases only a single attempt to retrieve the data will be possible.

a) RAM acquisition and compression

Cold boot tools like msramdump were written in times when 4 GB of RAM was rare. The student is asked to investigate the possibilities of adding a pre-boot memory compression stage to the Linux kernel. This compression stage should compress the current content of RAM into the upper regions of RAM, thereby freeing lower regions of RAM to boot a (small) forensic Linux operating system. This forensic OS should be able to acquire the compressed RAM to a disk or network destination.

b) RAM acquisition and transmission

Cold boot tools like msramdump were written in times when 4 GB of RAM was rare. The student is asked to investigate the possibilities of building a 64-bit equivalent of msramdump and adding (BIOS-based) networking support for usage in a PXE environment.
"Ruud Schramp (DT)" <schramp=>holmes.nl>
"Zeno Geradts (DT)" <zeno=>holmes.nl>
"Erwin van Eijk (DT)" <eijk=>holmes.nl>

Martijn Bogaard <martijn.bogaard=>os3.nl>

#63 (RP2)

Graph500 in the public cloud.

Graph500 (graph500.org) is a ranking list of the fastest graph processing platforms worldwide. For example, DAS4 at the VU University is nowadays in the top 100.

To add new entries to this ranking list, one needs to tune and measure the performance of two kernels of a breadth-first search algorithm on the newly proposed platform. The input graphs are generated using a standard procedure, and the largest supported graph needs to be reported together with the performance numbers.

In this project, we propose to tune the Graph500 benchmark to run on a cloud, and define a performance model that allows scaling up the dimensions of the graph with the number of machines to be rented and the length of the reservation.

The following deliverables are requested from the student:
  1. a running version of the Graph500 benchmark on the cloud
  2. a performance report for a well-chosen graph size and the corresponding resource usage
  3. a performance model to correlate the graph size with the needed cloud resources for a top-20 performance mark (a toy version of such a model is sketched below).
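
A toy version of deliverable 3, under loudly stated assumptions: Graph500 defines a problem of scale s as 2^s vertices with an edge factor of 16, so memory demand grows roughly linearly in the number of edges. The bytes-per-edge, per-VM memory and price constants below are placeholders to be calibrated against the measurements from deliverable 2.

import math

EDGE_FACTOR = 16             # Graph500: edges = 16 * 2**scale
BYTES_PER_EDGE = 16          # assumption: two 64-bit vertex identifiers
VM_MEMORY_GIB = 60           # assumption: usable RAM per rented VM
VM_PRICE_PER_HOUR = 1.50     # assumption: on-demand price, arbitrary units

def vms_needed(scale):
    edges = EDGE_FACTOR * 2 ** scale
    gib = edges * BYTES_PER_EDGE / 2 ** 30
    return max(1, math.ceil(gib / VM_MEMORY_GIB))

if __name__ == "__main__":
    for scale in (26, 30, 34, 38):
        n = vms_needed(scale)
        print(f"scale {scale}: {EDGE_FACTOR * 2 ** scale:.2e} edges, "
              f"{n} VMs, {n * VM_PRICE_PER_HOUR:.2f} per hour")
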
Ana Varbanescu <a.l.varbanescu=>uva.nl>

Harm Dermois <harm.dermois=>os3.nl>

#66 (RP2)

Project for hiding and detecting stego-data in a videostream.

Goal :
  • In the literature it is stated that data can be hidden in large amounts of data streams, and several methods are presented, for example at http://stegano.net/tools.html.
Approach :
  • One of the methods for steganography in video (for example OpenPuff) can be selected and used to make an IP video stream.
Result :
  • Which methods are available for (real time) detection and how can you prevent detection as such ?
Zeno Geradts (DT) <zeno=>holmes.nl>

Alexandre Miguel Ferreira <Alexandre.MiguelFerreira=>os3.nl>

#67 (RP2)

Improving drive-by download detection: "visit^n. process. analyse. report."

In a world where nearly every high traffic website has an advertisement provider who buys/sells/trades ads on a dark place on the internet, a malware infection through a drive-by download is not uncommon. Detecting these malicious advertisements is key in preventing malware infections on a large scale.

To detect drive-by downloads one could use a virtual machine to visit a list of websites and look for suspicious changes. However, this does not scale if the list of websites to check increases. Visiting one url at a time with one VM sequentially is very inefficient.

Is it possible to improve this process by visiting multiple sites at a time during one session and still be able to determine which website was responsible for serving a malicious advertisement?

Keep in mind:
  • Not every visitor receives the same advertisement
  • One infection (try) per IP is very common
  • The malicious ad is only served in a small timeframe
Requirements (intake @NCSC-NL):
  • Python leetness is required for a PoC.
  • Cuckoo experience is a plus.
This rp is best done by a team of two students.
Jop van der Lelie <Jop.vanderLelie=>ncsc.nl>
Wouter Katz <wouter.katz=>ncsc.nl>

Adriaan Dens <adriaan.dens=>os3.nl>
Martijn Bogaard <martijn.bogaard=>os3.nl>

#73 (RP1)

Securing the SDN Northbound Interface with the aid of Anomaly Detection.

Software-defined networks (SDN) are an interesting research topic, and they are starting to be used by commercial parties. As SDN was, until recently, an academic-only topic, security has been an afterthought. This research will investigate the (in)security of the Northbound API. The current state of the Northbound API of popular SDN controllers will be assessed, and where needed, ideas for improvement will be suggested. This can then be generalised to determine what a secure Northbound interface should look like.
Haiyun Xu <h.xu=>sig.eu>

Jan Laan <Jan.Laan=>os3.nl>

#74 (RP2)

Trusted Network Initiative to combat DDos attacks.

Distributed Denial of Service (DDoS) attacks are becoming an increasingly bigger problem. They are increasing in frequency as well as in numbers. The distributed nature also makes it a lot harder to defend against, since there is not a single source. One solution is to force everyone to implement BCP38, which would prevent source address spoofing. Unfortunately we do not live in an ideal world where this is possible.

A new idea has been launched by NLNet and the Hague Security Delta, called the "Trusted Network Initiative". This idea is described using the metaphor of a castle with a moat and drawbridge. This would then close off the Dutch Internet to only the trusted networks.

In this research project you will be evaluating this idea, work with a test network, and/or network simulations to evaluate the effectiveness of this solution.

Marc Gauw <marc.gauw=>nlnet.nl>

Jeroen van Kessel <Jeroen.vanKessel=>os3.nl>
Alexandros Stavroulakis <Alexandros.Stavroulakis=>os3.nl>

#75 (RP1)

Peeling the Google Public DNS onion.

Description:

In late 2009, Google announced that they would start offering a public DNS service called "Google Public DNS" [1]. Since it was launched, Google Public DNS has gained popularity quickly and Google now claims that they are the largest public DNS service on the Internet.

There has been a lot of debate about this service with some people claiming that this is yet another indication that Google wants to take over the internet (e.g. [2]). This assignment is not about the desirability and (perceived) evilness of this service (although you are entitled to an opinion about it, of course ;-) ). The goal of this assignment is to learn more about how Google DNS performs and works internally.

On the pages describing the service, Google makes all sorts of claims about performance [3] and security [4]. While these claims give hints of how the service is built up, nobody outside Google actually knows how the service works.

The idea is to use RIPE Atlas probes to learn more about Google DNS. Atlas probes are geographically distributed, making them ideal vantage points to study a big distributed system like Google DNS. By setting up your own domain for which you control the authoritative name server and by cleverly using the probes to perform DNS queries, it should be possible to learn about the following (a small sketch of the log-analysis side follows the list below):
  • the performance of Google DNS in different geographic regions
  • cache coherency worldwide (is there a single shared cache for the whole service or do queries from different locations result in multiple queries to authoritative name servers)?
  • do queries to an authoritative name server originate in the region of the original query to Google DNS or are they local to the authoritative name server
  • ... (use your imagination, there's lots more we can learn)
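
A hedged sketch of the log-analysis side of such an experiment: assuming every probe queries a unique label under your own domain (e.g. probe-<id>.gdns-test.example.net) and your authoritative server writes BIND-style query logs, the script below maps each probe to the set of Google egress resolver addresses that showed up for its label. The log-line regex is an assumption and must be adapted to the actual log format.

import re
import sys
from collections import defaultdict

# assumed BIND-style query log line, e.g.
# "... client 74.125.47.1#36528 (probe-123.gdns-test.example.net): query: ..."
LOG_RE = re.compile(r"client (?P<src>[0-9a-fA-F.:]+)#\d+ \((?P<qname>probe-\d+\.[^)]*)\)")

def egress_per_probe(lines):
    seen = defaultdict(set)
    for line in lines:
        m = LOG_RE.search(line)
        if m:
            probe = m.group("qname").split(".")[0]   # e.g. "probe-123"
            seen[probe].add(m.group("src"))          # Google egress resolver address
    return seen

if __name__ == "__main__":
    for probe, sources in sorted(egress_per_probe(sys.stdin).items()):
        # many distinct egress addresses per probe hints at a distributed, sharded cache
        print(f"{probe}: {len(sources)} egress address(es): {sorted(sources)}")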

Skills/knowledge:

This assignment is best suited for two students working together, as it can be quite a bit of work.

To perform this assignment in the limited time you have, it helps if you have the following skills/knowledge (in descending order of importance):
  • Ability to configure and run a DNS server
  • Good working knowledge of a scripting language to analyse log files
  • Familiarity with statistical analysis of data sets
  • Familiarity with RIPE Atlas probes

Location:

You are expected to perform the assignment in SURFnet's offices, which are centrally located right next to Utrecht Central Station.
  1. https://developers.google.com/speed/public-dns/
  2. http://www.wired.com/2009/12/geez-google-wants-to-take-over-dns-too/
  3. https://developers.google.com/speed/public-dns/docs/performance
  4. https://developers.google.com/speed/public-dns/docs/security
  5. https://atlas.ripe.net
Roland van Rijswijk - Deij <Roland.vanRijswijk=>surfnet.nl>

Ardho Rohprimardho <Ardho.Rohprimardho=>os3.nl>
Tarcan Turgut <tarcan.Turgut=>os3.nl>

#76 (RP1)

Security automation and optimization using HP-NA.

HPNA is proprietary HP software that makes management of a large network much easier by centralizing management and keeping track of configuration modifications and of who made these changes. It standardizes the configurations using scripts. In addition, its policies can test the configurations' compliance within the organization's network.

There will be several goals in this research project.
  1. The first and biggest one aims to test the ability to gather new CVE vulnerabilities using the "HP Network Automation (HPNA)" software. These vulnerabilities are then to be filtered and chosen depending on the specific equipment and requirements of the company. The found problems, along with their solutions, have to be automatically passed on to the network administrators so that they can be manually addressed. If the administrators then fix the given problems, this will help an organization to efficiently improve its security regarding known flaws for its specific equipment. The outcome of this method will finally describe how much more secure a company could be using it, by defining how much better it could perform in a penetration-testing environment at the vulnerability-scan level.
  2. A second goal would be the automation of checking configuration points dedicated to security, such as ACLs and authentication configurations. Indeed, it is needed to automatically know whether all the equipment is correctly configured for security and behaves the same way (one cannot check several hundreds of devices manually). As an example, look for weaknesses in SSH keys depending on the key size and key generation. This has to be checked using HPNA because such weaknesses could be a security flaw.
  3. A final and small goal will be to change the default authentication certificate of HPNA, because every purchased HPNA installation has the exact same certificate. An intruder would then be able to easily spoof the identity of the HPNA and have access to a big part of the company's network.
Olivier Willm <olwillm=>airfrance.fr>

Florian Ecard <fecard=>os3.nl>

#77 (RP1)

CoinShuffle anonymity in the Block chain.

Bitcoin has recently become an increasingly popular cryptocurrency through which users trade electronically in a peer-to-peer fashion and more anonymously than via traditional electronic transfers. Bitcoin's design keeps all transactions identified only by cryptographic public key identifiers, which can be seen as pseudonyms. This leads to a common misconception that Bitcoin and other cryptocurrencies inherently provide anonymous use.

So-called mixing services are designed to mix one's funds with others', with the intention to obfuscate the money trails. The hypothesis is that mixing services can further improve the anonymity of cryptocurrency transactions.

In this project, you will investigate the hypothesis by
  • Building a 'ScholarCoin' ecosystem, that is, setting up your own cryptocurrency and wallets, and mining an initial set of ScholarCoins, using Bitcoin tools
  • Designing and implementing a mixing service for ScholarCoins
  • Performing a set of transactions with your new ScholarCoins and anonymizing these transactions by using your mixing service
  • Tracing the performed transactions and mixing operation, and demonstrating plausible deniability
Roberta Piscitelli <roberta.piscitelli=>tno.nl>
Oskar van Deventer <oskar.vandeventer=>tno.nl>

Jan-Willem Selij <Jan-Willem.Selij=>os3.nl>

#80 (RP2)

Preventing Common Attacks on Critical Infrastructure.

The Trusted Networks Initiative is an idea of The Hague Security Delta [1] to mitigate security incidents by using ’trusted networks’ on the Internet. These ’trusted networks’ are separated from the Internet and can be disconnected as a last-resort measure during emergencies. During such an emergency, where the ’trusted networks’ are disconnected from the Internet, the connectivity remains between the participating nodes of the ’trusted networks’. The ’trusted networks’ were originally meant for networks that are categorised as critical infrastructure. Nowadays, everyone can become part of a ’trusted network’ when they meet the policy requirements [2].
The original authors of this paper do not fully agree with the original idea and approach of the Trusted Network Initiative. They think the current Internet, as people nowadays make use of it, should be improved. However, creating a separate network, something like Internet 2.0, does not improve the Internet that is used nowadays. Therefore, the research team will look into the most common attacks that larger companies face and how they can be or could be mitigated with today’s technologies. The Dutch Internet Service Provider KPN is also interested in this research and is therefore involved in this project.

The research is formed around the main question shown below.
  • Which techniques are available today that could be used to mitigate the most common attacks?
As a result of the main question, shown above, there are the following sub questions:
  • What kind of problems does the Trusted Networks Initiative try to solve?
  • What kind of attacks are critical infrastructures suffering from?
  • What kind of techniques can be used to harden yourself or your company (e.g. ingress/egress filtering, trusted routing, etc.) to protect against certain attacks that the critical infrastructures suffer from?
  • If there are techniques available, why are they not used in common practice?
    • Which step(s) could assist adopting these techniques?
[1] The Hague Security Delta, Project Trusted Networks Initiative, January 2015.
  https://www.thehaguesecuritydelta.com/projects/project/60
[2] The Hague Security Delta, Trusted Networks Policy, 24 November 2014.
  https://www.thehaguesecuritydelta.com/images/20141124_Trusted_Networks_ Policy_beta-vs0_7.pdf
Oscar Koeroo <Oscar.Koeroo=>kpn.com>

Koen Veelenturf <koen.veelenturf=>os3.nl>
Wouter Miltenburg <wouter.miltenburg=>os3.nl>

#81 (RP1)

Evaluation of the security of drone communication.

Drones are being increasingly used throughout society. More and more different manufacturers are hopping on the drone-wagon. Is there any attention paid to security regarding the communication with these devices? Is there some form of encryption applied to the control up-link, or is everything out in the open? The main purpose of this research will be to develop (if possible) a method to jam, interfere or hijack a drone.
Martijn Sprengers <sprengers.martijn=>kpmg.nl>

Yonne de Bruijn <ybruijn=>os3.nl>
James Gratchoff <james.gratchoff=>os3.nl>

#82 (RP1)

Crawling the USENET for DMCA.

USENET has been around since the early years of the Internet: a place for discussions, questions and news articles. The use of a client-server model gives the reader access to a huge amount of information at high speed. These characteristics have made USENET very popular in the pirating scene. In an attempt to combat the loss of income to the copyright holders, privately funded organisations have been hired to start sending removal requests. The USENET providers have been swamped under the mountain of automated requests and have given the copyright holders direct access in order to comply with local law. This allows the copyright holders to remove content without oversight and completely automatically.
During the research period we want to devise a method of creating a reliable index of the original state of the USENET network, and create a user-friendly interface to display which files have been removed from which provider.
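
A minimal illustration (Python standard library nntplib; server and newsgroup are placeholders) of how such an index could be seeded: take an overview snapshot of the most recent articles a provider carries for one group. A real crawler would persist this in a database, walk all groups in batches, and diff the snapshots per provider over time.

import nntplib

SERVER = "news.example-provider.net"   # placeholder USENET provider
GROUP = "alt.test"                     # placeholder newsgroup

def snapshot(server, group, batch=1000):
    """Return {article number: message-id} for the newest articles in one group."""
    with nntplib.NNTP(server) as nntp:
        resp, count, first, last, name = nntp.group(group)
        start = max(first, last - batch + 1)
        resp, overviews = nntp.over((start, last))
        return {number: fields.get("message-id", "") for number, fields in overviews}

if __name__ == "__main__":
    index = snapshot(SERVER, GROUP)
    print(f"{GROUP} on {SERVER}: {len(index)} articles currently available")
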
The research question for this project can be split in three sub-questions:

  1. Can a comprehensive database including DMCA takedowns be created?
  2. What methods exist to keep article availability up to date?
  3. Is it feasible to keep the entire USENET article availability up to date?
Niels Sijm <niels.sijm=>os3.nl>
Arno Bakker <arno.bakker=>os3.nl>

Eddie Bijnen <Eddie.Bijnen=>os3.nl>

#83 (RP2)

Improving the Performance of IPOP.

Abstract: The aim of this project is to improve the performance of IPOP, an IP-over-P2P overlay in cloud environments. IPOP is used within Kangaroo [1], a nested cloud infrastructure, for providing connectivity between nested VMs hosted on VMs started over different clouds. Currently, IPOP uses libjingle [2] for providing NAT traversal and encryption for IP traffic, but these services are redundant and add unnecessary overhead within e.g. Amazon EC2 where VMs have direct connectivity via an internal virtual network and the traffic is protected via tenant-specific virtual networks.

As part of this project, the students should add support to IPOP for detecting the situations where it is possible to bypass libjingle for high-performance packet transfers. In these situations, the IPOP endpoints should be automatically configured to forward packets captured from tap interfaces to other endpoints through a custom, low-overhead forwarding layer - without going through libjingle links. Additionally, the project can investigate the use of high-performance user-level networking techniques such as netmap [3] or netfilter PF_RINGs [4] for achieving 10Gbps or above.

This project is part of an international collaboration between University of Florida, VU Amsterdam, and University of Rennes. The students will have a chance to work closely with researchers from these universities.

[1] http://www.cs.vu.nl/~kaveh/pubs/pdf/ic2e15.pdf
[2] http://code.google.com/p/libjingle
[3] http://info.iet.unipi.it/~luigi/netmap/
[4] http://www.ntop.org/pf_ring/pf_ring-and-transparent-mode
Renato Figueiredo <renato=>acis.ufl.edu>
Kaveh Razavi <kaveh=>cs.vu.nl>
Ana Oprescu <a.m.oprescu=>vu.nl>

Dragos Barosan <Dragos.Barosan=>os3.nl>

#84 (RP2)

Remote data acquisition of IskraME372 GPRS smart meter.

Back in 2009, the Dutch government backed off plans for a mandatory smart meter deployment after a majority of Parliament members were convinced that the forced installation of smart meters would violate the consumers' right to privacy and would be in breach of the European Convention on Human Rights. It was determined that granular data collection by smart meters would leak information regarding the habits and living patterns of consumers. In addition, there was also a risk of such information falling into the hands of a third party for nefarious purposes [1]. The possibility of this risk is increased for two reasons:
  1. GSM has already been proven to be quite insecure. The cryptographic algorithms involved in GSM have been broken. Furthermore, the absence of network authentication makes it possible for an attacker to take full control of the data of a victim's GSM device by setting up a rogue Base Transceiver Station (BTS) and forcing the devices to connect to it. It has to be underlined that the attack with a rogue BTS does not rely on any weakness in the cryptographic algorithms used in GSM, so it has always been possible, even before the cryptographic algorithms were broken. In addition, General Packet Radio Service (GPRS), the data communication protocol added to GSM after its initial deployment, has all along been suspected of being just as insecure as GSM itself.
  2. The second reason which increases the risk is that in recent times this attack has gained a lot of attention because of its extremely low price to implement (900 EUR).
In this project, we are investigating whether these suspicions are right on the mark. What is about to be shown is that an attacker, with a budget equal to 900 EUR, can set up a rogue BTS, make the victim's GSM smart meter device connect to such a BTS, and gain full control over its data communications. Specifically, a recent GPRS-capable smart meter with electricity and gas sensors will be used in order to identify the way, the type and the frequency of data being sent. Finally, a Man-In-the-Middle (MiTM) attack will also be part of the agenda.

The main research question is:
  • Is it possible to passively capture data sent by the smart meter through the rogue BTS ? If so, what are the implications?
A quite interesting subquestion would be:
  • Can we enable GPRS on the rogue BTS and apply a MiTM attack?
Max Hovens <hovens.max=>kpmg.nl>
Rick van Galen <vanGalen.Rick=>kpmg.nl>

Nikos Sidiropoulos <Nikos.Sidiropoulos=>os3.nl>

#91 (RP2)

HTTP Header analysis.

The order and presence of HTTP headers in requests are specific to certain browsers and applications. By correlating the HTTP requests, fingerprints can be made for specific browsers and applications. Next to the fingerprinting, the 'outliers' can be spotted, which include malware command and control communication over HTTP.
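
A minimal sketch of such a fingerprint: take the ordered list of header names from a raw request (in practice extracted from a pcap or proxy log; the example request here is made up) and hash it. Requests whose fingerprint is rare against a baseline of known browsers are the outliers worth inspecting.

import hashlib

def header_fingerprint(raw_request):
    """Fingerprint = hash over the ordered, lower-cased header names of a request."""
    lines = raw_request.split("\r\n")
    names = [line.split(":", 1)[0].lower() for line in lines[1:] if ":" in line]
    return hashlib.sha1("|".join(names).encode()).hexdigest()[:12]

if __name__ == "__main__":
    example = ("GET / HTTP/1.1\r\n"
               "Host: example.com\r\n"
               "User-Agent: Mozilla/5.0\r\n"
               "Accept: */*\r\n"
               "Connection: keep-alive\r\n"
               "\r\n")
    print(header_fingerprint(example))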

Research questions:
  • How to create reliable fingerprints for specific browsers and applications?
  • Can it be used to spot malicious communication?
Renato Fontana <renato.fontana=>fox-it.com>

Roland Zegers <rzegers=>os3.nl>

#92 (RP2)

Measure The Impact of Docker on Network I/O Performance.

Idea: Measure the impact of Docker on network IO and CPU performance.

Our company runs applications that exchange messages with business partners' applications. These applications typically read a stream of UDP multicast datagrams and react to a fraction of those datagrams by sending messages via TCP to the business partner's application. The reaction time, i.e. the latency between the incoming UDP datagram and the outgoing TCP response message, is of extreme importance. The applications are written in C++ and Java and run under GNU/Linux using a kernel-bypass network stack. We are now looking to use Docker to deploy and operate those applications. The research question is to find out how much performance loss this change introduces.

Assignment
In this assignment we ask you to precisely quantify the impact of using Docker on the UDP-in to TCP-out reaction time of our applications under different load scenarios. The applications must be configured in exactly the same way before and after "dockerization", i.e. use the same versions of all required libraries and - in the case of Java applications - the same Java runtime environment. To support this task, we have internal documentation on how Docker is used at our company, and we also have tools that compare the running configuration of two Linux machines/containers.
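
A toy harness for this kind of measurement is sketched below: a stand-in responder answers each incoming UDP datagram with a message on a pre-established TCP connection, and the harness timestamps the round trip. Addresses, ports and sample counts are placeholders, and a serious measurement needs far more care (CPU pinning, warm-up, percentiles, hardware timestamping); the point is only to run the same responder natively and inside a container and compare the numbers.

import socket
import statistics
import sys
import time

UDP_PORT = 9000                # responder listens here for datagrams
TCP_PORT = 9001                # harness listens here for the TCP reactions
RESPONDER_HOST = "127.0.0.1"   # where the harness sends UDP (container or native)
HARNESS_HOST = "127.0.0.1"     # where the responder connects back over TCP
SAMPLES = 1000

def responder():
    """Stand-in for the real application: react to each UDP datagram with a TCP message."""
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.bind(("0.0.0.0", UDP_PORT))
    tcp = socket.create_connection((HARNESS_HOST, TCP_PORT))
    while True:
        data, _ = udp.recvfrom(1024)
        tcp.sendall(data)

def harness():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", TCP_PORT))
    srv.listen(1)
    print("waiting for the responder to connect over TCP ...")
    conn, _ = srv.accept()
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    lat = []
    for i in range(SAMPLES):
        t0 = time.perf_counter()
        udp.sendto(f"{i:08d}".encode(), (RESPONDER_HOST, UDP_PORT))
        conn.recv(1024)                                  # wait for the TCP reaction
        lat.append((time.perf_counter() - t0) * 1e6)     # microseconds
    lat.sort()
    print(f"median {statistics.median(lat):.1f} us, p99 {lat[int(0.99 * len(lat))]:.1f} us")

if __name__ == "__main__":
    responder() if sys.argv[1:] == ["responder"] else harness()
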
Cees de Laat <delaat=>uva.nl>

Ardho Rohprimardho <Ardho.Rohprimardho=>os3.nl>

#93 (RP2)

PTPv2 switch accuracy and hollow core fiber latency measurement.

Ideas:
  • Measure the Accuracy of PTP-enabled switches.
  • Measure latency and loss of Hollow core fiber against regular fiber.
Our company uses a state-of-the-art monitoring system that is the basis for all our global LAN and WAN measurements, in a way that the base accuracy is not degraded. However, we would now like to extend these existing methods to larger use cases involving multiple devices with separate oscillators. The research question is to find out how much accuracy loss this change introduces.

In this assignment you will be asked to precisely quantify the measurement error of two state-of-the-art switches synchronized with PTP. To do this, we will first ask you to re-create previous internal measurements, which ensures that you have the correct baseline for further tests. This will be aided by extensive internal documentation, both on the step-by-step methods and the well-known final results. Using this baseline, you can then extend the testbed to multiple synchronized devices, and measure the accuracy loss that this entails. This project will enable you to propose specific accuracy improvements that will later be suggested to the vendor itself.

Hollow core fiber is known to feature lower latency than regular fiber, because the light travels through air, which has a lower refractive index than glass. However, it is also less robust, with higher losses per km, which severely limits its reach for very long WAN links. The research question is whether this technology is ready to be deployed in "real-world" industry usage, to be answered by measuring the actual performance and availability characteristics of this technology when interconnected with standard connectors (e.g. SFP+).
In this assignment you will be asked to precisely quantify the latency and the losses of a coil of hollow core fiber, and compare it to regular fiber under the same conditions (same length and same SFP+ terminations).
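
To set expectations for the latency measurement, a rough back-of-the-envelope comparison can be made; the refractive-index values below are typical ballpark figures, not vendor specifications:

# Rough propagation-delay comparison per kilometre of fiber.
# Index values are typical ballpark figures, not vendor specifications.
C_KM_PER_S = 299_792.458            # speed of light in vacuum, km/s

def delay_us_per_km(refractive_index: float) -> float:
    return refractive_index / C_KM_PER_S * 1e6

smf = delay_us_per_km(1.468)        # standard single-mode fiber
hcf = delay_us_per_km(1.003)        # hollow core fiber (light mostly in air)
print(f"standard fiber : {smf:.3f} us/km")
print(f"hollow core    : {hcf:.3f} us/km")
print(f"saving         : {smf - hcf:.3f} us/km ({(1 - hcf / smf) * 100:.0f}%)")

With these assumed values the difference is roughly 1.5 µs per km, or about a 30% reduction, which gives an idea of the resolution the measurement setup needs.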

Confidentiality: All project reports and results will be confidential (NDA).
Cees de Laat <delaat=>uva.nl>
Carlo Rengo <Carlo.Rengo=>os3.nl>
Martin Leucht <martin.leucht=>os3.nl>
R
P
2
95

Online events registration with minimal privacy violation.

There is a growing need in different fields of computer networking to register and store real-world network traffic, for example in networking and security research, incident response and forensics, systems engineering, network operations, legal compliance and security auditing, and education. Monitoring network traffic, or being able to query a system for a certain event, is often not possible due to privacy concerns. The same goes for analysis of complete data sets.

Network traffic logs or flows are built from packet headers that consist of fields which, depending on the context, may contain sensitive information. For example, network topology and user identities could be derived from IP addresses. If deep packet inspection (DPI) is used, even more information, such as email or document content, could be exposed.

This research project aims to eliminate the negative privacy impact of monitoring systems by using anonymisation or pseudonymisation: processes that modify the network traffic data in such a way that the identity of the end user is protected. Ideally this preserves the usefulness, or utility, of the network data. With pseudonymisation the original identities or values remain recoverable by the organisation responsible for the process; with anonymisation they do not.
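
As an illustration of the pseudonymisation step, a minimal sketch is given below; the keyed-hash approach, the key handling and the mapping back into the IPv4 space are assumptions, and deciding whether such a scheme (or, e.g., a prefix-preserving one) is adequate is exactly part of the research:

# Keyed-hash pseudonymisation of IPv4 addresses: deterministic, so traffic
# from the same host stays correlatable, yet the real address is not visible.
# Note: a hash is one-way; to make the mapping recoverable (pseudonymisation
# rather than anonymisation) the data owner must keep a lookup table or use
# an encryption-based, e.g. format-preserving, scheme instead.
import hashlib
import hmac
import ipaddress

SECRET_KEY = b"replace-with-a-managed-secret"   # assumed to be held by the data owner

def pseudonymise_ip(ip: str) -> str:
    digest = hmac.new(SECRET_KEY, ip.encode(), hashlib.sha256).digest()
    # Map the digest back into the IPv4 space so existing tools still parse it.
    return str(ipaddress.IPv4Address(int.from_bytes(digest[:4], "big")))

print(pseudonymise_ip("192.0.2.15"))   # same input always yields the same pseudonym
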
Jeroen van der Ham <jeroen.vanderham=>ncsc.nl>

Niels van Dijkhuizen <niels.vandijkhuizen=>os3.nl>
R

P
2
96

Security evaluation of smart-cars’ remote control applications.

Modern cars are increasingly equipped with Wi-Fi hotspots, 3G Internet connections (e.g. BMW ConnectedDrive), keyless entry, and smartphone apps that allow certain functions of the car to be used remotely. You will study one or more of these examples to identify the risks associated with these options. Cars will be made available to you to perform the required tests.
Cees de Laat <delaat=>uva.nl>

Florian Ecard <fecard=>os3.nl>
R

P
2
97

The use of workflow topology observables in a Security Autonomous Response Network.

In the last few years, programmable networks have become more mainstream because of technologies such as Software Defined Networking and Network Function Virtualisation. These technologies allow software to define the topology and the scaling of a network.

Applications, possibly forming a complex workflow over multiple nodes, are then rolled out on top of this software-controlled infrastructure. The deployed applications can be scaled out when there is a spike in traffic and scaled down when demand drops.

The SNE Research Group at the University of Amsterdam recently discovered interesting relations between the workflow topology and the data flowing through it. This thesis will investigate how the observables of a software-controlled workflow topology are sensitive to variations in the properties of the data on which the workflow operates. Changes in these observables, such as link bandwidth usage, might indicate a cyber-attack. This project will look into these observables and their applicability for Security Autonomous Response Networks.
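
Purely as an illustration (the window size, threshold and the simple z-score test below are assumptions, not the SNE group's method), watching an observable such as link bandwidth for sudden deviations could look like this:

# Toy anomaly check on a stream of link-bandwidth samples (Mbit/s): flag a
# sample that deviates strongly from the recent history of the observable.
from collections import deque
from statistics import mean, stdev

WINDOW = 30          # number of recent samples to keep (assumption)
THRESHOLD = 3.0      # z-score above which we raise an alert (assumption)
history = deque(maxlen=WINDOW)

def observe(bandwidth_mbit: float) -> bool:
    """Return True if the new sample looks anomalous given recent history."""
    anomalous = False
    if len(history) >= 5 and stdev(history) > 0:
        z = abs(bandwidth_mbit - mean(history)) / stdev(history)
        anomalous = z > THRESHOLD
    history.append(bandwidth_mbit)
    return anomalous

for sample in [98, 101, 99, 102, 100, 97, 450]:   # last sample: sudden spike
    if observe(sample):
        print(f"observable changed sharply: {sample} Mbit/s")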

The programs and data used can be found here.
Marc Makkes <M.X.Makkes=>uva.nl>
Robert Meijer <robert.meijer=>tno.nl>

Adriaan Dens <adriaan.dens=>os3.nl>
R

P
2
98

StealthWare - Social Engineering Malware.

Pentesters and security professionals often use software beacons (i.e. friendly malware) for social engineering assignments. They use these 'moles' to identify possible security vulnerabilities from inside the network. The moles are often deployed via methods such as social engineering or spearphishing: sending e-mails with malicious attachments or handing out specialised USB sticks. The development of specialised malware for social engineering assignments is not done on a large scale, and the current suppliers offer limited functionality and/or charge substantial licensing costs. For my research, I would like to investigate the effectiveness of existing social engineering malware, further develop it, and/or develop my own proof of concept. My research will focus on usability within company environments and on detectability by commonly found security software and hardware such as firewalls, network authentication, intrusion detection systems and virus scanners.
Jeroen van Beek <jeroen=>dexlab.nl>
Mark Bergman <mark=>bergman.nl>

Joey Dreijer <Joey.Dreijer=>os3.nl>
R

P
2
99

Forum post classification to support forensic investigations of illegal trade on the Dark Web.

Tor, currently the most popular and well-known darknet, exhibits a wide range of illegal content and activity, such as drug and weapon trade, money laundering, hacking services, child pornography, and even assassination services. Much of the trading activity on Tor is organized in online marketplaces, most of which are accompanied by a discussion forum. For law enforcement to get a grip on such flexible, large-scale and often professionally organized trade of illegal goods and services, the availability of smart tools for monitoring and analysing these marketplaces and forums is becoming an urgent matter.

In this project we would like to explore techniques to support forensic investigation of online trade in illegal goods. Firstly, we would like to investigate whether there are any useful correlations or cross-links between the Agora marketplace and the discussion forum that accompanies it. For instance, are the users who post most actively on the discussion board also the users with the highest revenue in the marketplace?

As a supporting technique for the above goal, we want to explore the automatic classification of forum posts into categories such as trading feedback, shipping, scamming, and product offers. The underlying technique TNO would like to use for this is based on word2vec, which builds a space of word representations from a large data set; these representations can then be used as features for clustering or classification. The project will include exploratory experiments to find out whether this model can indeed be used for forum post classification.
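
A minimal sketch of the word2vec-based featurisation step is given below; it follows the gensim 4.x API, and the toy corpus, labels and averaging of word vectors are illustrative assumptions rather than TNO's actual pipeline:

# Represent each forum post as the average of its word vectors, then train a
# simple classifier on the labelled examples. Corpus and labels are toy data.
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

posts = [
    ("fast shipping, product arrived well packed", "shipping"),
    ("vendor never delivered, total scam", "scamming"),
    ("top quality product, will order again", "feedback"),
    ("package took three weeks to arrive", "shipping"),
]
tokenised = [text.split() for text, _ in posts]

w2v = Word2Vec(sentences=tokenised, vector_size=50, window=5, min_count=1, epochs=50)

def post_vector(tokens):
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

X = np.array([post_vector(t) for t in tokenised])
y = [label for _, label in posts]
clf = LogisticRegression(max_iter=1000).fit(X, y)

print(clf.predict([post_vector("parcel arrived quickly".split())]))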

Martijn Spitters <martijn.spitters=>tno.nl>
Stefan Verbruggen <stefan.verbruggen=>tno.nl>

Diana Rusu <diana.rusu=>os3.nl>
R

P
2

Presentations-rp2

The event is spread over two days: Wednesday and Thursday, July 1-2, 2015.

Wednesday July 1st, 2015. Auditorium H0.08, FNWI, Science Park 904, Amsterdam.

Time  | #RP | Title | Name(s) | Loc | RP
09h50 |     | Welcome, introduction. | Cees de Laat | |
09h55 | 78  | SURFdrive security. | Jorian van Oostenbrugge | SURFnet | 1
10h15 | 7   | Automated configuration of BGP on edge routers. | Stella Vouteva, Tarcan Turgut | nlnetlabs | 2
10h40 |     | Break | | |
11h00 | 20  | Creating your own Internet Factory. | Ioannis Grafis | TNO/UvA | 2
11h20 | 97  | The use of observables of work flow topologies in Security Autonomous Response Networks. | Adriaan Dens | TNO/UvA | 2
11h40 | 45  | Container Networking Solutions. | Joris Claassen | UvA | 2
12h00 |     | Lunch | | |
13h30 | 95  | Online events registration with minimal privacy violation. | Niels van Dijkhuizen | ncsc | 2
13h50 | 33  | Zero-effort Monitoring Support. | Julien Nyczak | openfortress | 2
14h10 | 98  | StealthWare - Social Engineering Malware. | Joey Dreijer | dexlab | 2
14h30 | 96  | Security evaluation of smart-cars’ remote control applications. | Florian Ecard | Deloitte | 2
14h50 |     | Break | | |
15h10 | 88  | Traffic Analysis Visualization. | Nikolaos Triantafyllidis | fox-it | 2
15h30 | 91  | HTTP Header analysis. | Roland Zegers | fox-it | 2
15h50 | 83  | Improving the Performance of an IP-over-P2P Overlay for Nested Cloud Environments. | Dragos Barosan | VU | 2
16h10 |     | End | | |




Thursday July 2nd, 2015. Auditorium C0.05, FNWI, Science Park 904, Amsterdam.

Time  | #RP | Title | Name(s) | Loc | RP
09h30 |     | Welcome, introduction. | Cees de Laat | |
09h40 | 73  | Securing the SDN Northbound Interface with the aid of Anomaly Detection. | Jan Laan | SIG | 2
10h00 | 92  | Measure The Impact of Docker on Network I/O Performance. | Ardho Rohprimardho | UvA | 2
10h20 | 93  | PTPv2 switch accuracy and hollow core fiber latency measurement. | Carlo Rengo, Martin Leucht | UvA | 2
10h45 |     | Break | | |
11h00 | 11  | Discovery method for a DNSSEC validating stub resolver. | Xavier Torrent Gorjón | nlnetlabs | 2
11h20 | 12  | Analysis of DNS Resolver Performance Measurements. | Hamza Boulakhrif | nlnetlabs | 2
11h40 | 16  | Functional breakdown of decentralised social networks. | Wouter Miltenburg | nlnet | 2
12h00 |     | Lunch | | |
13h30 | 42  | Extremely Secure Communication. | Daniel Romão | KPMG | 2
13h50 | 48  | Proving the wild jungle jump. | James Gratchoff | Riscure | 2
14h10 | 50  | Circumventing Forensic Live-Acquisition Tools On Linux. | Yonne de Bruijn | UvA | 2
14h30 | 60  | Pre-boot RAM acquisition and compression. | Martijn Bogaard | NFI | 2
14h50 |     | Break | | |
15h10 | 63  | Graph500 in the public cloud. | Harm Dermois | UvA | 2
15h30 | 77  | CoinShuffle anonymity in the Blockchain. | Jan-Willem Selij | TNO | 2
15h50 | 8   | Effective load balancing using Service Bus mechanisms in (multi-tenant) cloud based applications. | Andrey Afanasyev | UvA | 1
16h10 |     | Closing | Cees de Laat & OS3 team | |
16h15 |     | End | | |





Presentations-rp1

Tuesday Feb 3rd, 2015, 13h00 in B1.23 at Science Park 904, NL-1098XH Amsterdam.

Time  | #RP | Title | Name(s) | Loc | RP
13h00 |     | Introduction | Cees de Laat | |
13h10 | 7   | Cloud powered services composition using Public Cloud PaaS platform. | Andrey Afanasyev | UvA SNE | 1
13h30 | 9   | Security Intelligence Data Mining. | Nikolaos Triantafyllidis, Diana Rusu | Deloitte | 1
13h55 | 74  | Trusted Network Initiative to combat DDoS attacks. | Jeroen van Kessel, Alexandros Stavroulakis | NLNET | 1
14h20 |     | Break | | |
14h40 | 17  | Migration models for hosting companies. | Xander Lammertink | NLNET | 1
15h00 | 81  | Evaluation of the security of drone communication. | Yonne de Bruijn, James Gratchoff | KPMG | 1
15h25 |     | Break | | |
15h45 | 24  | DANE verification test suite. | Guido Kroon, Hamza Boulakhrif | NLNET | 1
16h10 | 57  | Known plaintext attacks on encrypted ZIP files. | Dragos Barosan | tjaldur | 1
16h30 |     | Feedback | team | |
16h40 |     | End | | |





Wednesday Feb 4th, 2015, in room B1.23 at Science Park 904, NL-1098XH Amsterdam.

Time  | #RP | Title | Name(s) | Loc | RP
10h30 |     | Welcome, introduction. | Cees de Laat | |
10h40 | 41  | Protecting against relay attacks forging increased distance reports. | Xavier Torrent Gorjón | KPMG | 1
11h00 |     | Break | | |
11h20 | 49  | Feasibility and Deployment of BadUSB. | Stella Vouteva | KPMG | 1
11h40 | 52  | Evaluating the security of the KlikAanKlikUit Internet Control Station 1000. | Roland Zegers | KPMG | 1
12h00 |     | Lunch | | |
13h00 | 67  | Improving drive-by download detection: "boot. visit^n. kill. repeat." | Adriaan Dens, Martijn Bogaard | NCSC | 1
13h25 | 14  | Teleporting virtual machines. | Carlo Rengo, Harm Dermois | TNO | 1
13h50 | 75  | Peeling the Google Public DNS onion. | Ardho Rohprimardho, Tarcan Turgut | SURFnet | 1
14h15 |     | Break | | |
14h35 | 76  | Security automation and optimization using HP-NA. | Florian Ecard | Air France | 1
14h55 | 80  | Preventing Common Attacks on Critical Infrastructure. | Koen Veelenturf, Wouter Miltenburg | KPN | 1
15h20 | 38  | Monitoring DNSSEC. | Martin Leucht, Julien Nyczak | OpenFortress | 1
15h45 |     | Feedback | team | |
15h55 |     | End | | |



Out-of-schedule presentations:

Date, Time | Place | D (min) | #RP | Title | Name(s) | Loc | RP
2015-06-12 16h00 | B1.23 | 20 | 84 | Smart Meter sensitive data. | Nikos Sidiropoulos | KPMG | 2
2015-08-21 10h30 | B1.23 | 20 | 35 | Running FastCGI over SCTP. | Ioannis Gianoulatos | OpenFortress | 2
2015-08-21 11h00 | B1.23 | 20 | 99 | Forum post classification to support forensic investigations of illegal trade on the Dark Web. | Diana Rusu | TNO | 2