| title / summary | supervisor contact | students | R | P | 1/2 |
1 |
Blockchain's Relationship
with Sovrin for Digital Self-Sovereign Identities.
Summary: Sovrin (sovrin.org) is a blockchain for
self-sovereign identities. TNO operates one of the
nodes of the Sovrin network. Sovrin enables easy
exchange and verification of identity information
(e.g. "age=18+") for business transactions.
Potential savings are estimated to be over 1 B€ per
year for just the Netherlands. However, Sovrin
provides only an underlying infrastructure.
Additional query-response protocols are needed. This
is being studied in e.g. the Techruption
Self-Sovereign-Identity-Framework (SSIF) project.
The research question is which functionalities are
needed in the protocols for this. The work includes
the development of a datamodel, as well as an
implementation that connects to the Sovrin network.
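As an illustration of the kind of query-response data model this project could develop, below is a minimal Python sketch of a verifier's proof request for an "age=18+" predicate. The field names are assumptions loosely modelled on Hyperledger Indy proof requests, not a prescribed format.

  # Hypothetical sketch of a verifier's query ("proof request") asking a
  # holder to prove age >= 18 without revealing the birth date itself.
  # Field names are assumptions loosely modelled on Hyperledger Indy.
  import json
  import uuid

  def build_age_proof_request(min_age: int = 18) -> dict:
      return {
          "name": "age-check",
          "version": "1.0",
          "nonce": uuid.uuid4().hex,        # freshness / replay protection
          "requested_attributes": {},       # nothing revealed in the clear
          "requested_predicates": {
              "age_over_18": {
                  "name": "age",
                  "p_type": ">=",
                  "p_value": min_age,
              }
          },
      }

  if __name__ == "__main__":
      print(json.dumps(build_age_proof_request(), indent=2))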
(2018-05) |
Oskar van Deventer
<oskar.vandeventer=>tno.nl>
|
|
|
2 |
Sensor data streaming
framework for Unity.
In order to build a Virtual Reality "digital twin"
of an existing technical framework (like a smart
factory), the static 3D representation needs to
"play" sensor data which either is directly
connected or comes from a stored snapshot. Although
a specific implementation of this already exists,
the student is asked to build a more generic
framework for this, which is also able to "play"
position data of parts of the infrastructure (for
example moving robots). This will enable the
research on virtually working on a digital twin
factory.
Research question:
- What are the requirements and limitations of a
seamless integration of smart factory sensor
data for a digital twin scenario?
There are existing network capabilities of Unity,
existing connectors from Unity to ROS (Robot
Operating System) for sensor data transmission, and
an existing 3D model which uses position data.
The student is asked to:
- Build a generic infrastructure which can
either play live data or snapshot data.
- The sensor data will include position data,
but also other properties which are displayed in
graphs and should be visualized by 2D plots
within Unity.
The software framework will be published under an
open source license after the end of the project. |
Doris Aschenbrenner
<d.aschenbrenner=>tudelft.nl>
|
|
|
3 |
To optimize or not: on
the impact of architectural optimizations on
network performance.
Project description: Networks are becoming extremely
fast. On our testbed with 100Gbps network cards, we
can send up to 150 million packets per second with
under 1us of latency. To support such speeds, many
microarchitectural optimizations such as the use of
huge pages and direct cache placement of network
packets need to be in effect. Unfortunately, these
optimizations, if not applied carefully, can
significantly harm performance or security. While
the security aspects are becoming clear [1], the
end-to-end performance impacts remain unknown. In
this project, you will investigate the performance
impacts of using huge pages and last level cache
management in high-performance networking
environments. If you were always wondering what
happens when receiving millions of packets at
nanosecond scale, this project is for you!
Requirements: C programming, knowledge of computer
architecture and operating systems internals.
[1] NetCAT: Practical Cache Attacks from the
Network, Security and Privacy 2020.
|
Animesh Trivedi <animesh.trivedi=>vu.nl>
Kaveh Razavi <kaveh=>cs.vu.nl> |
|
|
4 |
The other faces of RDMA
virtualization.
Project description: RDMA is a technology that
enables very efficient transfer of data over the
network. With 100Gbps RDMA-enabled network cards, it
is possible to send hundreds of millions of messages
with under 1us latency. Traditionally RDMA has
mostly been used in single-user setups in HPC
environments. However, recently RDMA technology has
been commoditized and used in general purpose
workloads such as key-value stores and transaction
processing. Major data centers such as Microsoft
Azure are already using this technology in their
backend services. It is not surprising that there is
now support for RDMA virtualization to make it
available to virtual machines. We would like you to
investigate the limitations of this new technology
in terms of isolation and quality of service between
different tenants.
Requirements: C programming, knowledge of computer
architecture and operating systems internals.
Supervisors: Animesh Trivedi and Kaveh Razavi, VU
Amsterdam
|
Animesh Trivedi <animesh.trivedi=>vu.nl>
Kaveh Razavi <kaveh=>cs.vu.nl>
|
|
|
5 |
Verification of Object
Location Data through Picture Data Mining
Techniques.
Shadows in outdoor pictures give away information
about the location of the objects they contain.
Based on the position, length, and direction of a
shadow, the location information found in the
metadata of a picture can be verified. The objective
of this project is to develop algorithms that find
freely available images on the internet where the
location data has been tampered with. The
deliverables from this project are the location
verification algorithms, a live web service that
verifies the location information of the object, and
a non-public facing database that contains
information about images whose location metadata has
been removed or falsely altered.
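As a rough illustration of the underlying idea, the sketch below predicts the sun elevation (and hence the relative shadow length) for a claimed location and timestamp, which could then be compared with the shadows visible in the picture. The solar-position approximation is a simplification; a real implementation would use a dedicated library (e.g. pysolar) and the image's EXIF metadata.

  # Hedged sketch: predict sun elevation and relative shadow length for a
  # claimed location/time, to compare against the shadow in the picture.
  import math
  from datetime import datetime, timezone

  def sun_elevation_deg(lat_deg, lon_deg, when_utc: datetime) -> float:
      day = when_utc.timetuple().tm_yday
      # Approximate solar declination (degrees).
      decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day + 10)))
      # Approximate solar time from longitude (ignores the equation of time).
      solar_hours = when_utc.hour + when_utc.minute / 60.0 + lon_deg / 15.0
      hour_angle = 15.0 * (solar_hours - 12.0)
      lat, d, h = map(math.radians, (lat_deg, decl, hour_angle))
      return math.degrees(math.asin(
          math.sin(lat) * math.sin(d) + math.cos(lat) * math.cos(d) * math.cos(h)))

  def expected_shadow_ratio(lat_deg, lon_deg, when_utc) -> float:
      """Shadow length divided by object height (infinite near sunrise/sunset)."""
      elev = sun_elevation_deg(lat_deg, lon_deg, when_utc)
      return float("inf") if elev <= 0 else 1.0 / math.tan(math.radians(elev))

  if __name__ == "__main__":
      t = datetime(2020, 6, 21, 12, 0, tzinfo=timezone.utc)
      print(expected_shadow_ratio(52.37, 4.90, t))   # Amsterdam, summer solstice, 12:00 UTC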
|
Junaid Chaudhry <chaudhry=>ieee.org>
|
|
|
7 |
Artificial
Intelligence Assisted carving.
Problem Description:
Carving for data and locating files belonging to a
principal (the client) can be hard if we only use
keywords. This still requires a lot of manual work
to create keyword lists, which might not even be
sufficient to find what we are looking for.
Goal:
- Create a simple framework to detect documents
of a certain set (or company) within carved data
by utilizing machine learning. Closely related
to document identification.
This research project is currently the only
open project at our Forensics department rated at
MSc level. Of course, if your students have any
ideas for a cybersecurity/forensics related project
they are always welcome to contact us.
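As a rough sketch of the goal above, the following fragment (assuming scikit-learn is available) scores carved text fragments as belonging or not to a principal's document set; the corpus, features and classifier are placeholders, and choosing them is part of the project.

  # Minimal sketch of document identification on carved text fragments.
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.linear_model import LogisticRegression
  from sklearn.pipeline import make_pipeline

  # Toy training data: texts known to belong to the principal vs. unrelated texts.
  train_texts = ["quarterly report acme bv", "acme invoice payment terms",
                 "holiday photo captions", "random forum discussion"]
  train_labels = [1, 1, 0, 0]   # 1 = principal's document set

  model = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
                        LogisticRegression(max_iter=1000))
  model.fit(train_texts, train_labels)

  carved_fragment = "acme bv payment due within 30 days"
  print(model.predict_proba([carved_fragment])[0][1])  # probability it is relevant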
|
Danny Kielman <danny.kielman=>fox-it.com>
Mattijs Dijkstra
<mattijs.dijkstra=>fox-it.com> |
|
|
8 |
Usage
Control in the Mobile Cloud.
Mobile clouds [1] aim to integrate mobile computing
and sensing with rich computational resources
offered by cloud back-ends. They are particularly
useful in services such as transportation,
healthcare and so on when used to collect, process
and present data from the physical world. In this
thesis, we will focus on the usage control, in
particular privacy, of the collected data pertinent
to mobile clouds. Usage control[2] differs from
traditional access control by not only enforcing
security requirements on the release of data but also
on what happens afterwards. The thesis will involve
the following steps:
- Propose an architecture over cloud for "usage
control as a service" (extension of
authorization as a service) for the enforcement
of usage control policies
- Implement the architecture (compatible with
Openstack[3] and Android) and evaluate its
performance.
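To make "usage control as a service" concrete, here is a minimal sketch of a sticky policy that is re-evaluated on every use of the data, not only at release time; the policy fields and actions are assumptions, not the architecture to be designed.

  # Hedged sketch of usage control: a "sticky" policy travels with the data
  # and is re-evaluated for every action after release (UCON-style).
  from dataclasses import dataclass
  from datetime import datetime, timedelta

  @dataclass
  class StickyPolicy:
      allowed_actions: set
      expires_at: datetime
      max_uses: int
      uses: int = 0

      def authorize(self, action: str) -> bool:
          """Ongoing usage decision: checked on every use, not only at release."""
          if datetime.utcnow() > self.expires_at:
              return False
          if action not in self.allowed_actions or self.uses >= self.max_uses:
              return False
          self.uses += 1            # mutable attribute update, as in UCON
          return True

  policy = StickyPolicy({"read", "aggregate"},
                        datetime.utcnow() + timedelta(days=7), max_uses=3)
  for action in ["read", "share", "read"]:
      print(action, policy.authorize(action))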
References
[1] https://en.wikipedia.org/wiki/Mobile_cloud_computing
[2] Jaehong Park, Ravi S. Sandhu: The UCONABC usage
control model. ACM Trans. Inf. Syst. Secur. 7(1):
128-174 (2004)
[3] https://en.wikipedia.org/wiki/OpenStack
[4] Slim Trabelsi, Jakub Sendor: "Sticky policies
for data control in the cloud" PST 2012: 75-80
|
Fatih Turkmen <F.Turkmen=>uva.nl>
Yuri Demchenko <y.demchenko=>uva.nl> |
|
|
9 |
Security of embedded
technology.
Analyzing the security of embedded technology, which
operates in an ever changing environment, is
Riscure's primary business. Therefore, research and
development (R&D) is of utmost importance for
Riscure to stay relevant. The R&D conducted at
Riscure focuses on four domains: software, hardware,
fault injection and side-channel analysis. Potential
SNE Master projects can be shaped around the topics
of any of these fields. We would like to invite
interested students to discuss a potential Research
Project at Riscure in any of the mentioned fields.
Projects will be shaped according to the
requirements of the SNE Master.
Please have a look at our website for more
information: https://www.riscure.com
Previous Research Projects conducted by SNE
students:
- https://www.os3.nl/_media/2013-2014/courses/rp1/p67_report.pdf
- https://www.os3.nl/_media/2011-2012/courses/rp2/p61_report.pdf
- http://rp.os3.nl/2014-2015/p48/report.pdf
- https://www.os3.nl/_media/2011-2012/courses/rp2/p19_report.pdf
If you want to see what the atmosphere is at
Riscure, please have a look at: https://vimeo.com/78065043
Please let us know if you have any additional
questions! |
Ronan Loftus <loftus=>riscure.com>
Alexandru Geana <Geana=>riscure.com>
Karolina Mrozek <Mrozek=>riscure.com>
Dana Geist <geist=>riscure.com>
|
|
|
11 |
Cross-blockchain oracle.
Interconnection between different blockchain
instances, and smart contracts residing on those,
will be essential for a thriving multi-blockchain
business ecosystem. Technologies like hashed
timelock contracts (HTLC) enable atomic swaps of
cryptocurrencies and tokens between blockchains. A
next challenge is the cross-blockchain oracle, where
the status of an oracle value on one blockchain
enables or prevents a transaction on another
blockchain.
The goal of this research project is to explore the
possibilities, impossibilities, trust assumptions,
security and options for a cross-blockchain oracle,
as well as to provide a minimal viable
implementation.
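For illustration, the hashed timelock logic mentioned above can be sketched in a few lines of Python; real HTLCs are of course implemented as on-chain scripts or smart contracts on each blockchain.

  # Minimal sketch of hashed timelock contract (HTLC) logic, in plain Python
  # purely to illustrate the idea behind atomic swaps.
  import hashlib
  import time

  class HTLC:
      def __init__(self, hashlock: bytes, timeout_s: int, amount: int):
          self.hashlock = hashlock          # sha256 of a secret chosen by the initiator
          self.deadline = time.time() + timeout_s
          self.amount = amount
          self.settled = False

      def claim(self, preimage: bytes) -> bool:
          """Counterparty claims the funds by revealing the secret before the deadline."""
          ok = (not self.settled and time.time() < self.deadline
                and hashlib.sha256(preimage).digest() == self.hashlock)
          self.settled = self.settled or ok
          return ok

      def refund(self) -> bool:
          """Initiator takes the funds back after the deadline if no secret was revealed."""
          ok = not self.settled and time.time() >= self.deadline
          self.settled = self.settled or ok
          return ok

  secret = b"my atomic swap secret"
  htlc = HTLC(hashlib.sha256(secret).digest(), timeout_s=3600, amount=10)
  print(htlc.claim(b"wrong guess"), htlc.claim(secret))   # False True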
(2018-05)
|
Oskar van Deventer
<oskar.vandeventer=>tno.nl>
Maarten Everts <maarten.everts=>tno.nl> |
|
|
16 |
Network aware performance
optimization for Big Data applications using
coflows.
Optimizing data transmission is crucial to improve
the performance of data intensive applications. In
many cases, network traffic control plays a key role
in optimising data transmission especially when data
volumes are very large. In many cases,
data-intensive jobs can be divided into multiple
successive computation stages, e.g., in MapReduce
type jobs. A computation stage relies on the outputs
of the previous stage and cannot start until all
its required inputs are in place. Inter-stage data
transfer involves a group of parallel flows, which
share the same performance goal, such as minimising
the completion time of the entire group.
CoFlow is an application-aware network control model
for cluster-based data centric computing. The CoFlow
framework is able to schedule the network usage
based on the abstract application data flows (called
coflows). However, customizing CoFlow for different
application patterns, e.g., choosing proper network
scheduling strategies, is often difficult, in
particular when the high level job scheduling tools
have their own optimizing strategies.
The project aims to profile the behavior of CoFlow
with different computing platforms, e.g., Hadoop and
Spark etc.
- Review the existing CoFlow scheduling
strategies and related work
- Prototyping test applications using big data
platforms (including Apache Hadoop, Spark, Hive,
Tez).
- Set up coflow test bed (Aalo, Varys etc.)
using existing CoFlow installations.
- Benchmark the behavior of CoFlow in different
application patterns, and characterise the
behavior.
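As a small illustration of the metric such a benchmark would report, the sketch below computes the coflow completion time (CCT) of a group of flows under a simple fair-sharing assumption on a single bottleneck link; real CoFlow schedulers such as Varys and Aalo use more elaborate models.

  # Rough sketch: the coflow completion time is determined by the *last*
  # flow of the group; fair sharing of one bottleneck link is assumed.
  def coflow_completion_time(flow_sizes_mb, link_capacity_mbps):
      active = list(flow_sizes_mb)
      t = 0.0
      while active:
          share = link_capacity_mbps / len(active)      # fair share per flow
          smallest = min(active)
          t += smallest * 8 / share                     # time until the smallest flow finishes
          active = [f - smallest for f in active if f - smallest > 1e-9]
      return t

  # A MapReduce-style shuffle: three flows of different sizes over a 1 Gbps link.
  print(coflow_completion_time([100, 400, 1000], 1000))   # seconds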
Background reading:
- CoFlow introduction: http://www2.eecs.berkeley.edu/Pubs/TechRpts/2015/EECS-2015-211.pdf
- Junchao Wang, Huan Zhou, Yang Hu, Cees de
Laat and Zhiming Zhao, Deadline-Aware Coflow
Scheduling in a DAG, in NetCloud 2017, Hong Kong,
to appear [upon request]
More info: Junchao Wang, Spiros Koulouzis, Zhiming
Zhao |
Zhiming Zhao <z.zhao=>uva.nl> |
|
|
17 |
Elastic data services for
time critical distributed workflows.
Large-scale observations over extended periods of
time are necessary for constructing and validating
models of the environment. Therefore, it is
necessary to provide advanced computational
networked infrastructure for transporting large
datasets and performing data-intensive processing.
Data infrastructures manage the lifecycle of
observation data and provide services for users and
workflows to discover, subscribe and obtain data for
different application purposes. In many cases,
applications have high performance requirements,
e.g., disaster early warning systems.
This project focuses on data aggregation and
processing use-cases from European research
infrastructures, and investigates how to optimise
infrastructures to meet critical time requirements
of data services, in particular for different
patterns of data-intensive workflow. The student
will use some initial software components [1]
developed in the ENVRIPLUS [2] and SWITCH [3]
projects, and will:
- Model the time constraints for the data
services and the characteristics of data access
patterns found in given use cases.
- Review the state of the art technologies for
optimising virtual infrastructures.
- Propose and prototype an elastic data service
solution based on a number of selected workflow
patterns.
- Evaluate the results using a use case
provided by an environmental research
infrastructure.
References:
[1] https://staff.fnwi.uva.nl/z.zhao/software/drip/
[2] http://www.envriplus.eu
[3] http://www.switchproject.eu
More info: Spiros Koulouzis, Paul Martin, Zhiming
Zhao |
Zhiming Zhao <z.zhao=>uva.nl> |
|
|
18 |
Contextual information
capture and analysis in data provenance.
Tracking the history of events and the evolution of
data plays a crucial role in data-centric
applications for ensuring reproducibility of
results, diagnosing faults, and performing
optimisation of data-flow. Data provenance systems
[1] are a typical solution, capturing and recording
the events generated in the course of a process
workflow using contextual metadata, and providing
querying and visualisation tools for use in
analysing such events later.
Conceptual models such as W3C PROV (and extensions
such as ProvONE), OPM and CERIF have been proposed
to describe data provenance, and a number of
different solutions have been developed. Choosing a
suitable provenance solution for a given workflow
system or data infrastructure requires consideration
of not only the high-level workflow or data
pipeline, but also performance issues such as the
overhead of event capture and the volume of
provenance data generated.
The project will be conducted in the context of the
EU H2020 ENVRIPLUS project [1, 2]. The goal of this
project is to provide practical guidelines for
choosing provenance solutions. This entails:
- Reviewing the state of the art for provenance
systems.
- Prototyping sample workflows that demonstrate
selected provenance models.
- Benchmarking the results of sample workflows,
and defining guidelines for choosing between
different provenance solutions (considering
metadata, logging, analytics, etc.).
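As an example of the kind of provenance record involved, below is a small sketch assuming the Python "prov" package is available; a benchmark run would generate such records for every workflow step and measure the capture overhead and the volume of provenance data.

  # Small sketch of capturing workflow events as W3C PROV,
  # assuming the "prov" package (pip install prov).
  from prov.model import ProvDocument

  doc = ProvDocument()
  doc.add_namespace("ex", "http://example.org/envri/")

  raw = doc.entity("ex:raw-observations")
  clean = doc.entity("ex:cleaned-dataset")
  step = doc.activity("ex:quality-control")
  doc.used(step, raw)                     # the activity consumed the raw data
  doc.wasGeneratedBy(clean, step)         # ...and produced the cleaned dataset
  doc.wasDerivedFrom(clean, raw)

  print(doc.get_provn())                  # human-readable PROV-N serialization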
References:
- About the project: http://www.envriplus.eu
- Provenance background in ENVRIPLUS: https://surfdrive.surf.nl/files/index.php/s/uRa1AdyURMtYxbb
- Michael Gerhards, Volker Sander, Torsten
Matzerath, Adam Belloum, Dmitry Vasunin, and
Ammar Benabdelkader. 2011. Provenance
opportunities for WS-VLAM: an exploration of an
e-science and an e-business approach. In
Proceedings of the 6th workshop on Workflows in
support of large-scale science (WORKS '11). http://dx.doi.org/10.1145/2110497.2110505
More info: Zhiming Zhao, Adam Belloum, Paul Martin |
Zhiming Zhao <z.zhao=>uva.nl> |
|
|
19 |
Profiling Partitioning
Mechanisms for Graphs with Different
Characteristics.
In computer systems, the graph is an important model
for describing many things, such as workflows,
virtual infrastructures, ontological models, etc.
Partitioning is a frequently used graph operation in
contexts like parallelizing workflow execution,
mapping networked infrastructures onto distributed
data centers [1], and controlling load balance of
resources. However, developing an effective
partition solution is often not easy; it is often a
complex optimization issue involving constraints
such as system performance and cost.
A comprehensive benchmark of graph partitioning
mechanisms is helpful to choose a partitioning
solver for a specific model. Such a portfolio can
also give advice on how to partition based on the
characteristics of the graph. This project aims at
benchmarking the existing partition algorithms for
graphs with different characteristics, and profiling
their applicability for specific types of graphs.
This project will be conducted in the context of the
EU SWITCH [2] project. The students will:
- Review the state of the art of the graph
partitioning algorithms and related tools, such
as Chaco, METIS and KaHIP, etc.
- Investigate how to define the characteristics
of a graph, such as sparse graph, skewed graph,
etc. This can also be discussed with different
graph models, like planar graph, DAG,
hypergraph, etc.
- Build a benchmark for different types of
graphs with various partitioning mechanisms and
find the relationships behind them.
- Discuss how to choose a partitioning
mechanism based on the graph characteristics.
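As an example of a single benchmark data point, the sketch below (assuming networkx is available) partitions two generated graphs with different characteristics and records edge cut, balance and runtime; real runs would sweep the partitioners named above (METIS, KaHIP, Chaco, ...) and many more graph types.

  # One benchmark data point: bisect a graph and record cut size, balance, time.
  import time
  import networkx as nx
  from networkx.algorithms.community import kernighan_lin_bisection

  def benchmark_bisection(graph: nx.Graph) -> dict:
      start = time.perf_counter()
      part_a, part_b = kernighan_lin_bisection(graph, seed=42)
      elapsed = time.perf_counter() - start
      return {
          "edge_cut": nx.cut_size(graph, part_a, part_b),
          "balance": len(part_a) / graph.number_of_nodes(),
          "seconds": elapsed,
      }

  # Two graphs with very different characteristics: sparse random vs. power-law.
  for name, g in [("sparse", nx.gnp_random_graph(500, 0.01, seed=1)),
                  ("skewed", nx.barabasi_albert_graph(500, 3, seed=1))]:
      print(name, benchmark_bisection(g))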
Reading material:
- Zhou, H., Hu Y., Wang, J., Martin, P., de
Laat, C. and Zhao, Z., (2016) Fast and Dynamic
Resource Provisioning for Quality Critical Cloud
Applications, IEEE International Symposium On
Real-time Computing (ISORC) 2016, York UK
http://dx.doi.org/10.1109/ISORC.2016.22
- SWITCH: www.switchproject.eu
More info: Huan Zhou, Arie Taal, Zhiming Zhao
|
Zhiming Zhao <z.zhao=>uva.nl> |
|
|
20 |
Auto-Tuning for GPU
Pipelines and Fused Kernels.
Achieving high performance on many-core accelerators
is a complex task, even for experienced programmers.
This task is made even more challenging by the fact
that, to achieve high performance, code optimization
is not enough, and auto-tuning is often necessary.
The reason for this is that computational kernels
running on many-core accelerators need ad-hoc
configurations that are a function of kernel, input,
and accelerator characteristics to achieve high
performance. However, tuning kernels in isolation
may not be the best strategy for all scenarios.
Imagine having a pipeline that is composed of a
certain number of computational kernels. You can
tune each of these kernels in isolation, and find
the optimal configuration for each of them. Then you
can use these configurations in the pipeline, and
achieve some level of performance. But these kernels
may depend on each other, and may also influence
each other. What if the choice of a certain memory
layout for one kernel causes performance degradation
on another kernel?
One of the existing optimization strategies to deal
with pipelines is to fuse kernels together, to
simplify execution patterns and decrease overhead.
In this project we aim to measure the performance of
accelerated pipelines in three different tuning
scenarios:
- tuning each component in isolation,
- tuning the pipeline as a whole, and
- tuning the fused kernel.
By measuring the performance of one or more
pipelines in these scenarios we hope, on one level,
to determine which is the best strategy for the
specific pipelines on different hardware platforms,
and on another level to better understand which
characteristics influence this behavior.
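The three scenarios can be illustrated with a small Python sketch in which dummy runtime models stand in for real kernel measurements on a GPU; the point is the structure of the search, not the numbers.

  # Conceptual sketch of the three tuning scenarios with placeholder cost models.
  from itertools import product

  configs = [16, 32, 64, 128, 256]            # e.g. thread-block sizes

  def run_kernel_a(c):  return abs(c - 64) / 4 + 10     # placeholder "measurements"
  def run_kernel_b(c):  return abs(c - 128) / 4 + 20
  def run_pipeline(ca, cb):                             # kernels influence each other:
      penalty = 20 if ca != cb else 0                    # e.g. mismatched memory layouts
      return run_kernel_a(ca) + run_kernel_b(cb) + penalty
  def run_fused(c):     return abs(c - 96) / 4 + 28     # single fused kernel

  # 1. Tune each kernel in isolation, then compose.
  best_a = min(configs, key=run_kernel_a)
  best_b = min(configs, key=run_kernel_b)
  print("isolated:", run_pipeline(best_a, best_b))

  # 2. Tune the pipeline as a whole (joint search over both kernels).
  print("pipeline:", min(run_pipeline(a, b) for a, b in product(configs, configs)))

  # 3. Tune the fused kernel.
  print("fused:   ", min(run_fused(c) for c in configs))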
|
Rob van Nieuwpoort
<R.vanNieuwpoort=>uva.nl> |
|
|
22 |
Auto-tuning for Power
Efficiency.
Auto-tuning is a well-known optimization technique
in computer science. It has been used to ease the
manual optimization process that is traditionally
performed by programmers, and to maximize the
performance portability. Auto-tuning works by just
executing the code that has to be tuned many times
on a small problem set, with different tuning
parameters. The best performing version is then
subsequently used for the real problems. Tuning can
be done with application-specific parameters
(different algorithms, granularity, convergence
heuristics, etc) or platform parameters (number of
parallel threads used, compiler flags, etc).
For this project, we apply auto-tuning on GPUs. We
have several GPU applications where the absolute
performance is not the most important bottleneck for
the application in the real world. Instead the power
dissipation of the total system is critical. This
can be due to the enormous scale of the application,
or because the application must run in an embedded
device. An example of the first is the Square
Kilometre Array, a large radio telescope that
currently is under construction. With current
technology, it will need more power than all of the
Netherlands combined. In embedded systems, power
usage can be critical as well. For instance, we have
GPU codes that make images for radar systems in
drones. The weight and power limitations are an
important bottleneck (batteries are heavy).
In this project, we use power dissipation as the
evaluation function for the auto-tuning system.
Earlier work by others investigated this, but only
for a single compute-bound application. However,
many realistic applications are memory-bound. This
is a problem, because loading a value from the L1
cache can already take 7-15x more energy than an
instruction that only performs a computation (e.g.,
multiply).
There also are interesting platform parameters than
can be changed in this context. It is possible to
change both core and memory clock frequencies, for
instance. It will be interesting to see if we can,
at runtime, achieve the optimal balance between
these frequencies.
We want to perform auto-tuning on a set of GPU
benchmark applications that we developed. |
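A rough sketch of what a power-based tuning loop could look like, assuming nvidia-smi is available for power readings; the kernel launch is a placeholder for the real benchmark application.

  # Hedged sketch of using power/energy, not runtime, as the tuning objective.
  import subprocess
  import threading
  import time

  def gpu_power_watts() -> float:
      out = subprocess.check_output(
          ["nvidia-smi", "--query-gpu=power.draw", "--format=csv,noheader,nounits"])
      return float(out.decode().splitlines()[0])

  def energy_joules(run_kernel, interval=0.05) -> float:
      """Sample power in a background thread while the kernel runs; energy = mean power * time."""
      readings, done = [], threading.Event()

      def sampler():
          while not done.is_set():
              readings.append(gpu_power_watts())
              time.sleep(interval)

      thread = threading.Thread(target=sampler)
      start = time.time()
      thread.start()
      run_kernel()                        # placeholder: launch the tuned kernel here
      done.set()
      thread.join()
      return (sum(readings) / max(len(readings), 1)) * (time.time() - start)

  # Tuning loop: pick the configuration with the lowest energy, not the lowest runtime.
  configurations = [{"block_size": b} for b in (64, 128, 256)]
  best = min(configurations,
             key=lambda cfg: energy_joules(lambda: time.sleep(0.5)))  # cfg would parametrize the kernel
  print("lowest-energy configuration:", best)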
Rob van Nieuwpoort
<R.vanNieuwpoort=>uva.nl> |
|
|
23 |
Applying and Generalizing
Data Locality Abstractions for Parallel Programs.
TIDA is a library for high-level programming of
parallel applications, focusing on data locality.
TIDA has been shown to work well for grid-based
operations, like stencils and convolutions. These
are an important building block for many
simulations in astrophysics, climate simulations and
water management, for instance. The TIDA paper gives
more details on the programming model.
This project aims to achieve several things and
answer several research questions:
TIDA currently only works with up to 3D. In many
applications we have, higher dimensionalities are
needed. Can we generalize the model to N dimensions?
The model currently only supports a two-level
hierarchy of data locality. However, modern memory
systems often have many more levels, both on CPUs
and GPUs (e.g., L1, L2 and L3 cache, main memory,
memory banks coupled to a different core, etc). Can
we generalize the model to support N-level memory
hierarchies?
The current implementation only works on CPUs, can
we generalize to GPUs as well?
Given the above generalizations, can we still
implement the model efficiently? How should we
perform the mapping from the abstract hierarchical
model to a real physical memory system?
We want to test the new extended model on a real
application. We have examples available in many
domains. The student can pick one that is of
interest to her/him. |
Rob van Nieuwpoort
<R.vanNieuwpoort=>uva.nl> |
|
|
24 |
Ethereum Smart Contract
Fuzz Testing.
An Ethereum smart contract can be seen as a computer
program that runs on the Ethereum Virtual Machine
(EVM), with the ability to accept, hold and transfer
funds programmatically. Once a smart contract has
been placed on the blockchain, it can be executed by
anyone. Furthermore, many smart contracts accept
user input. Because smart contracts operate on a
cryptocurrency with real value, security of smart
contracts is of the utmost importance. I would like
to create a smart contract fuzzer that will check
for unexpected behaviour or crashes of the EVM.
Based on preliminary research, such a fuzzer does
not exist yet.
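A very small sketch of the idea, assuming web3.py and a local development EVM node on http://127.0.0.1:8545: it throws random calldata at a deployed contract via eth_call and logs anything that is not an ordinary revert. A real fuzzer would add coverage feedback and state-changing transactions.

  # Hedged sketch of a calldata fuzzer; contract address and node are placeholders.
  import os
  from web3 import Web3

  w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))
  CONTRACT = "0x0000000000000000000000000000000000000000"   # placeholder address

  def random_calldata(max_args_len=64) -> bytes:
      # 4-byte function selector followed by a random-length argument blob.
      return os.urandom(4) + os.urandom(ord(os.urandom(1)) % max_args_len)

  for _ in range(1000):
      data = random_calldata()
      try:
          w3.eth.call({"to": CONTRACT, "data": "0x" + data.hex()})
      except Exception as exc:      # reverts are expected; other errors are interesting
          if "revert" not in str(exc).lower():
              print(f"[!] unexpected node/EVM behaviour for input 0x{data.hex()}: {exc}")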
|
Rodrigo Marcos
<rodrigo.marcos=>secforce.com>
|
|
|
25 |
Smart contracts specified
as contracts.
Developing a distributed state of mind: from control
flow to control structure
The concepts of control flow, of data structure, as
well as that of data flow are well established in
the computational literature; in contrast, one can
find different definitions of control structures,
and typically these are not associated with the common
use of the term, referring to the power
relationships holding in society or in
organizations.
The goal of this project is the design and
development of a social architecture language that
cross-compiles to a modern concurrent programming
language (Rust, Go, or Scala), in order to make
explicit a multi-threaded, distributed state of
mind, following results obtained in agent-based
programming. The starting point will be a minimal
language subset of AgentSpeak(L).
Potential applications: controlled machine learning
for Responsible AI, control of distributed
computation |
Giovanni Sileno <G.Sileno=>uva.nl>
Mostafa Mohajeriparizi
<m.mohajeriparizi=>uva.nl> |
|
|
26 |
Zero Trust Validation.
ON2IT advocates the Zero Trust
Validation conceptual strategy [1] to
strengthen information security at the architectural
level. Zero Trust is often mistakenly perceived as
an architectural approach. However, it is, in the
end, a strategic approach towards protecting assets
regardless of location. To enable this approach,
controls are needed to provide sufficient insight
(visibility), to exert control, and to provide
operational feedback. However, these controls/probes
are not naturally available in all environments.
Finding ways to embed such controls, and
finding/applying them, can be challenging,
especially in the context of containerized, cloud
and virtualized workflows.
At the strategic level, Zero Trust is not
sufficiently perceived as a value contributor. At
the managerial level, it is perceived mainly as an
architectural ‘toy’. This makes it hard to translate
a Zero Trust strategic approach to the operational
level; there's a lack of overall coherence. For this
reason, ON2IT developed a Zero Trust Readiness
Assessment framework which facilitates testing the
readiness level on three levels: governance,
management and operations.
Research (sub)questions that emerge:
- What is missing in the current approach of ZTA
to make it resonate with the board?
- What are Critical Success Factors for
drafting and implementing ZTA?
- What is an easy to consume capability
maturity or readiness model for the adoption
of ZTA that guides boards and management
teams in making the right decisions?
- What does a management portal with
associated KPIs need to offer in order to
enable board and management to manage and
monitor the ZTA implementation process and
take appropriate ownership?
- How do we add the necessary controls and
efficiently leverage the control and monitoring
facilities thus provided?
References:
[1] Zero Trust Validation
[2] "On Exploring Research Methods for Business
Information Security Alignment and Artefact
Engineering" by Yuri Bobbert, University of Antwerp
|
Jeroen Scheerder
<Jeroen.Scheerder=>on2it.net>
|
|
|
28 |
OSINT Washing Street.
At the moment more and more OSINT is available via
all kinds of sources; a lot of them are legitimate
services that are abused by malicious actors.
Examples are GitHub, Pastebin, Twitter etc. If you
look at Pastebin data you might find IOCs/TTPs, but
usually the payloads are delivered in many stages,
so it is important to have a system that follows the
path until it finds the real payload. The question
here is how you can build a generic pipeline that
unravels data like a matryoshka doll: no matter
the input, the pipeline will try to decode, query or
perform whatever relevant action is needed.
This would result in better insight into the later
stages of an attack. An example of a framework using
this method is stoQ
(https://github.com/PUNCH-Cyber/stoq), but it
lacks research into usability and into whether the
results add value compared to other OSINT sources. |
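A minimal sketch of the matryoshka idea: keep applying decoders until none makes progress, then hand the innermost payload to further analysis. Real pipelines (cf. stoQ) plug in many more decoders and enrichment steps.

  # Recursively peel layered encodings from OSINT payloads.
  import base64
  import binascii
  import gzip
  import re

  def try_base64(data: bytes):
      try:
          decoded = base64.b64decode(re.sub(rb"\s+", b"", data), validate=True)
          return decoded or None
      except (binascii.Error, ValueError):
          return None

  def try_gzip(data: bytes):
      try:
          return gzip.decompress(data)
      except (OSError, EOFError):
          return None

  def try_hex(data: bytes):
      try:
          return bytes.fromhex(data.decode("ascii").strip())
      except (ValueError, UnicodeDecodeError):
          return None

  DECODERS = [try_gzip, try_base64, try_hex]

  def unravel(payload: bytes, max_depth: int = 10):
      layers = [payload]
      for _ in range(max_depth):
          for decoder in DECODERS:
              nxt = decoder(layers[-1])
              if nxt and nxt != layers[-1]:
                  layers.append(nxt)
                  break
          else:
              break          # no decoder made progress: innermost layer reached
      return layers

  inner = b"http://evil.example/stage2.bin"
  sample = base64.b64encode(base64.b64encode(inner))    # doubly wrapped payload
  print(unravel(sample)[-1])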
Joao Novaismarques
<joao.novaismarques=>kpn.com> |
|
|
29 |
Building an
open-source, flexible, large-scale static code
analyzer.
Background information
Data drives business, and maybe even the world.
Businesses that make it their business to gather
data are often aggregators of client-side generated
data. Client-side generated data, however, is
inherently untrustworthy. Malicious users can
construct their data to exploit careless, or naive,
programming and use this malicious, untrusted data
to steal information or even take over systems.
It is no surprise that large companies such as
Google, Facebook and Yahoo spend considerable
resources in securing their own systems against
would-be attackers. Generally, many methods have
been developed to make untrusted data cross the
trust boundary to trusted data, and effectively make
malicious data harmless. However, securing your
systems against malicious data often requires
expertise beyond what even skilled programmers might
reasonably possess.
Problem description
Ideally, tools that analyze code for vulnerabilities
would be used to detect common security issues. Such
tools, or static code analyzers, exist, but are
either outdated
(http://ripsscanner.sourceforge.net/) or part of
very expensive commercial packages
(https://www.checkmarx.com/ and
http://armorize.com/). Next to the need for an
open-source alternative to the previously mentioned
tools, we also need to look at increasing our scope.
Rather than focusing on a single codebase, the tool
would ideally be able to scan many remote,
large-scale repositories and report the findings
back in an easily accessible way.
An interesting target for this research would be
very popular, open-source (at this stage) Content
Management Systems (CMSs), and specifically plugins
created for these CMSs. CMS cores are held to a very
high coding standard and are often relatively
secure. Plugins, however, are necessarily less so,
but are generally as popular as the CMSs they’re
created for. This is problematic, because an
insecure plugin is as dangerous as an insecure CMS.
Experienced programmers and security experts
generally audit the most popular plugins, but this
is: a) very time-intensive, b) prone to errors and
c) of limited scope, i.e. not every plugin can be
audited. For example, if it were feasible to audit
all aspects of a CMS repository (CMS core and
plugins), the DigiNotar debacle could have easily
been avoided.
Research proposal
Your research would consist of extending our
proof-of-concept static code analyzer written in
Python and using it to scan code repositories,
possibly of some major CMSs and their plugins, for
security issues and finding innovative ways of
reporting on the massive amount of possible issues
you are sure to find. Help others keep our data that
little bit more safe. |
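As a purely structural illustration of such an analyzer (the proof-of-concept targets CMS code, typically PHP, rather than Python source), a toy scanner using Python's standard ast module:

  # Toy static analyzer: flag calls to dangerous sinks in Python source files.
  import ast
  import sys

  DANGEROUS_CALLS = {"eval", "exec", "system", "popen"}

  class SinkFinder(ast.NodeVisitor):
      def __init__(self):
          self.findings = []

      def visit_Call(self, node):
          name = ""
          if isinstance(node.func, ast.Name):
              name = node.func.id
          elif isinstance(node.func, ast.Attribute):
              name = node.func.attr
          if name in DANGEROUS_CALLS:
              self.findings.append((node.lineno, name))
          self.generic_visit(node)

  def scan(path: str):
      tree = ast.parse(open(path, encoding="utf-8").read(), filename=path)
      finder = SinkFinder()
      finder.visit(tree)
      for lineno, name in finder.findings:
          print(f"{path}:{lineno}: potentially dangerous call to {name}()")

  if __name__ == "__main__":
      for target in sys.argv[1:]:
          scan(target)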
Patrick Jagusiak
<patrick.jagusiak=>dongit.nl>
Wouter van Dongen
<wouter.vandongen=>dongit.nl>
|
|
|
30 |
Developing a Distributed
State of Mind.
A system required to be autonomous needs to be more
than just a computational black box that produces a
set of outputs from a set of inputs. Interpreted as
an agent provided with (some degree of) rationality,
it should act based on desires, goals and internal
knowledge for justifying its decisions. One could
then imagine a software agent much like a human
being or a human group, with multiple parallel
threads of thoughts and considerations which more
than often are in conflict with each other. This
distributed view contrasts the common centralized
view used in agent-based programming,and opens up to
potential cross-fertilization with distributed
computing applications which for the moment are for
the most unexplored.
The goal of this project is the design and
development of an efficient agent architecture in a
modern concurrent programming language (Rust, Go, or
Scala), in order to make explicit a multi-threaded,
distributed state of mind. |
Giovanni Sileno <G.Sileno=>uva.nl>
Mostafa Mohajeriparizi
<m.mohajeriparizi=>uva.nl>
|
|
|
32 |
Development of a control
framework to guarantee the security of a
collaborative open-source project.
We’re now living in an information society, and
everyone is expecting to be able to find everything
on the Web. IT developers are no exception and
spend a large part of their working hours searching
for and reusing pieces of code found on public
repositories (e.g. GitHub, GitLab …) or web forums
(e.g. StackOverflow).
The use of open-source software has long been seen
as a secure alternative as the code is available for
review to everyone, and as a result, bugs and
vulnerabilities should more easily be found and fixed.
Multiple incidents related to the use of Open-source
software (NPM, Gentoo, Homebrew) have shown that the
greater security of open-source components turned
out to be theoretical.
This research aims to highlight the root causes of
major recent incidents related to open-source
collaborative projects, as well as to propose a
global open-source security framework that could
address those issues.
References:
|
Tim Dijkhuizen <Dijkhuizen.Tim=>kpmg.nl>
Ruben Koeze <Koeze.Ruben=>kpmg.nl>
|
|
|
35 |
Security of IoT
communication protocols on the AWS platform.
In January 2020, Jason and Hoang from the OS3 master
worked on the project "Security Evaluation on Amazon
Web Services’ REST API Authentication Protocol
Signature Version 4"[1]. This project has shown the
resilience of the Sigv4 authentication mechanism for
HTTP protocol communications.
In June 2017, AWS released a service called AWS
Greengrass[2] that can be used as an intermediate
server for low-connectivity devices running the AWS
IoT SDK[3]. This is an interesting configuration as
it allows SigV4 authentication to be further
challenged in a disconnected environment using the
MQTT protocol.
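For reference, the SigV4 signing-key derivation that the earlier report examined can be expressed as a short Python sketch; the MQTT/Greengrass case additionally wraps this in the IoT SDK's connection handshake, and the service identifier in the credential scope depends on the endpoint being called.

  # SigV4 signing-key derivation (per the public AWS documentation).
  import hashlib
  import hmac

  def hmac_sha256(key: bytes, msg: str) -> bytes:
      return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

  def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
      """date is YYYYMMDD; the chain binds the key to day, region and service."""
      k_date = hmac_sha256(("AWS4" + secret_key).encode("utf-8"), date)
      k_region = hmac_sha256(k_date, region)
      k_service = hmac_sha256(k_region, service)
      return hmac_sha256(k_service, "aws4_request")

  # Example with a dummy secret and the "s3" service identifier; IoT endpoints
  # use their own service string in the credential scope.
  key = sigv4_signing_key("dummy-secret", "20200101", "eu-west-1", "s3")
  signature = hmac.new(key, b"<string-to-sign>", hashlib.sha256).hexdigest()
  print(signature)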
References:
[1] https://homepages.staff.os3.nl/~delaat/rp/2019-2020/p65/report.pdf
[2] https://docs.aws.amazon.com/greengrass/latest/developerguide/what-is-gg.html
[3] https://github.com/aws/aws-iot-device-sdk-python
|
Tim Dijkhuizen <Dijkhuizen.Tim=>kpmg.nl>
Ruben Koeze <Koeze.Ruben=>kpmg.nl> |
|
|
38 |
Threat modeling
on a concrete Digital Data Market.
Security and sovereignty are top concerns for data
federations among normally competing organizations.
Digital Data Marketplaces (DDMs) are emerging as
architectures to support this mode of interaction.
We have designed an auditable secure network overlay
for multi-domain distributed applications to
facilitate such trustworthy data sharing. We prove
our concepts with a running demonstration which
shows how a simple workflow can run across
organizational boundaries.
It is important to know how secure the overlay
network is for data federation applications. You
will perform threat modeling on a concrete DDM use
case.
- Discover a detailed outline of attack vectors.
- Investigate which attacks are already
successfully detected or avoided with current
secure mechanisms.
- Discover remaining security holes and propose
possible countermeasures.
|
"Zhang, Lu" <l.zhang2=>uva.nl> |
|
|
40 |
Version
management of project files in ICS.
Research in Industrial Control Systems: It is
difficult to have proper version management of the
project files, as they usually are stored offline.
We would like to come up with a solution to back up
and store project files in real time on a server,
and have the capability to revert/take snapshots
etc. of the versions used. Something like
Puppet/Chef/Ansible, but then for ICS.
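A rough sketch of one possible direction, assuming the Python watchdog package and a git repository on the server: every change to the project-file directory is committed immediately, giving a crude real-time version history with snapshots and reverts via git.

  # Auto-commit every file change in an ICS project-file directory.
  import subprocess
  import time
  from watchdog.events import FileSystemEventHandler
  from watchdog.observers import Observer

  WATCHED_DIR = "/srv/ics-project-files"     # placeholder path, assumed to be a git repo

  class AutoCommit(FileSystemEventHandler):
      def on_any_event(self, event):
          if event.is_directory:
              return
          subprocess.run(["git", "-C", WATCHED_DIR, "add", "-A"], check=False)
          subprocess.run(["git", "-C", WATCHED_DIR, "commit", "-m",
                          f"auto: {event.event_type} {event.src_path}"], check=False)

  if __name__ == "__main__":
      observer = Observer()
      observer.schedule(AutoCommit(), WATCHED_DIR, recursive=True)
      observer.start()
      try:
          while True:
              time.sleep(1)                  # run until interrupted
      except KeyboardInterrupt:
          observer.stop()
      observer.join()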
|
Michel van Veen <mvanveen=>deloitte.nl>
|
|
|
43 |
Future tooling
and cyber defense strategy for ICS.
Research in Industrial Control Systems: Is zero
trust networking possible in ICS? This is one of the
questions we are wondering about to sharpen our
vision and story around where ICS security is going
and which solutions are emerging. |
Michel van Veen <mvanveen=>deloitte.nl> |
|
|
45 |
End-to-end
encryption for browser-based meeting technologies.
Investigating the possibilities and limitations of
end-to-end encrypted browser-based video
conferencing, with a specific focus on security and
privacy preservation.
- What are possible approaches?
- How would they compare to each other?
|
Jan Freudenreich
<jfreudenreich=>deloitte.nl>
|
|
|
46 |
Evaluation of
the Jitsi Meet approach for end-to-end encrypted
browser-based video conferencing.
Determining the security of the library,
implementation and the environment setup.
|
Jan Freudenreich
<jfreudenreich=>deloitte.nl> |
|
|
47 |
Acceleration of
Microsoft SEAL library using dedicated hardware
(FPGAs).
Homomorphic encryption allows processing of
encrypted data, making it possible to use services
while sharing only encrypted information with the
service provider. However, the performance of
homomorphic encryption can be limited for certain
applications. The Microsoft Simple Encrypted
Arithmetic Library (Microsoft SEAL)
is a homomorphic encryption library (https://github.com/Microsoft/SEAL)
released as open source under the MIT license. The
overall goal of the project is to improve the
performance of the library by accelerating it by
means of dedicated hardware.
The specific use case considered here is the
acceleration of particularly critical routines using
reconfigurable hardware (FPGA). The research will
address the following challenges:
- The profiling of library components to
identify the main bottlenecks. The functions
that would benefit most from the hardware
acceleration will be identified, and a small
subset of them will be selected.
- An optimized hardware architecture for the
functions previously selected will be designed
and implemented using a hardware description
language (VHDL or Verilog)
- The performance of the hardware design will be
evaluated using state of the art design tools
for reconfigurable hardware (FPGA) and the speed
up achieved on the overall library will be
estimated
|
Francesco Regazzoni <f.regazzoni=>uva.nl>
|
|
|
48 |
High level
synthesis and physical attacks resistance.
High level synthesis is a well known approach that
allows designers to quickly explore different
hardware optimizations starting from a high-level
behavioral code. Despite its widespread use, the
effects of such an approach on security have not
been explored in depth yet. This project focuses on
physical attacks, where the adversary infers secret
information exploiting the implementation
weaknesses, and aims at exploring the effect of
different optimizations of high level synthesis on
physical attacks.
The specific use case considered here is the analysis
of resistance against physical attacks of
particularly critical blocks of cryptography
algorithms when implemented in hardware using high
level synthesis. The research will address the
following challenges:
- The selection of the few critical blocks to be
explored and the implementation of them using
high level behavioral language
- The realization of different versions of the
previously selected blocks using high level
synthesis tools.
- The collection (or the simulation) of the
traces needed to mount the side channel attack
- The security analysis of each version of each
block and the analysis of the effects on
security of the particular optimization used to
produce each version.
|
Francesco Regazzoni <f.regazzoni=>uva.nl>
|
|
|
49 |
Embedded FPGAs
(eFPGAs) for security.
Reconfigurable hardware offers the possibility to
easily reprogram its functionalities on the field,
making it suitable for applications where some
flexibility is required. Among these applications,
there is certainly cryptography, especially when
implemented in embedded and cyber-physical systems.
These devices often have a lifetime that is much
longer than usual consumer electronics, thus
they need to provide so-called crypto-agility
(the capability to update an existing cryptographic
algorithm). Reconfigurable hardware is currently
designed for general purpose, while better
performance could be reached by using reconfigurable
blocks specifically designed for cryptography
(https://eprint.iacr.org/2018/724.pdf). In this
project, open-source design tools have to be
explored to build a design flow for converting
HDL designs into a physical implementation of the
algorithm on the novel reconfigurable block.
The specific use case considered here is the
exploration of possible architectures for connecting
the novel reconfigurable block and the estimation of
the overhead of the connections. The research will
address the following challenges:
- acquire familiarity with the VTR tool and
other relevant design tools through discussion
with the supervisor, online tutorials and
example implementations,
- develop a custom design flow for the
reconfigurable block presented by Mentens et al.
- validate the design flow and the eFPGA
architecture through a number of existing
cryptographic benchmark circuits.
This thesis is in collaboration with KU Leuven
(Prof. Nele Mentens) |
Francesco Regazzoni <f.regazzoni=>uva.nl>
|
|
|
50 |
Approximate
computing and side channels.
Approximate computing is an emerging computing
paradigm where the precision of the computation is
traded against other metrics such as energy
consumption or performance. This paradigm has been
shown to be effective in various applications,
including machine learning and video streaming.
However, the effects of approximate computing on
security are still unknown. This project
investigates the effects of the approximate
computing paradigm on side channel attacks.
The specific use case considered here is the
exploration of the resistance against power analysis
attacks of devices when classical techniques used in
the approximate computing paradigm to reduce the
energy consumption (such as voltage scaling) are
applied. The research will address the following
challenges:
- Selection of the most appropriate techniques
for energy saving among the ones used in
approximate computing paradigm
- Realization of a number of simple
cryptographic benchmarks using HDL (VHDL or
Verilog) language
- Simulation of the power consumption in the
different scenarios
- Evaluation of the side channel resistance of
each version (see the sketch below).
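A minimal correlation power analysis (CPA) sketch with numpy on simulated traces, as could be used in that last step; the leakage model, the S-box stand-in and the noise level are placeholders, and voltage scaling would mainly change the noise term.

  # CPA on simulated traces: recover a key byte via Pearson correlation.
  import numpy as np

  SBOX = np.random.RandomState(0).permutation(256)     # stand-in for a cipher S-box

  def hamming_weight(x):
      return np.unpackbits(np.atleast_1d(x).astype(np.uint8)[:, None], axis=1).sum(axis=1)

  rng = np.random.default_rng(1)
  n_traces, true_key = 2000, 0x3C
  plaintexts = rng.integers(0, 256, n_traces)
  leakage = hamming_weight(SBOX[plaintexts ^ true_key])
  traces = leakage + rng.normal(0, 2.0, n_traces)       # simulated power samples + noise

  # Correlate the measured traces against the predicted leakage for every key guess.
  correlations = np.empty(256)
  for guess in range(256):
      predicted = hamming_weight(SBOX[plaintexts ^ guess])
      correlations[guess] = np.corrcoef(predicted, traces)[0, 1]

  print("recovered key byte:", int(np.argmax(np.abs(correlations))))   # expect 60 (0x3C)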
This thesis is in collaboration with University of
Stuttgart (Prof. Ilia Polian) |
Francesco Regazzoni <f.regazzoni=>uva.nl>
|
|
|
51 |
Decentralize a
legacy application using blockchain: a crowd
journalism case study.
Blockchain technologies demonstrated a huge
potential for application developers and operators
to improve service trustworthiness, e.g., in
logistics, finance and provenance. The migration of
a centralized distributed application into a
decentralized paradigm often requires not only a
conceptual re-design of the application
architecture, but also profound understanding of the
technical integration between business logic with
the blockchain technologies. In this project, we will
use the social network application (crowd
journalism) as a test case to investigate the
integration possibilities between a legacy system
and the blockchain. Key activities in the project:
- investigate the integration possibilities
between social network application and
permissioned blockchain technologies,
- make a rapid prototype to demonstrate the
feasibility, and
- assess the operational cost of blockchain
services.
The software of the crowd journalism will be
provided by a SME partner of EU ARTICONF project.
References: http://www.articonf.eu
|
Zhiming Zhao <z.zhao=>uva.nl>
|
|
|
52 |
Location aware
data processing in the cloud environment.
Data intensive applications are often workflows
involving distributed data sources and services.
When the data volumes are very large, especially
with different access constraints, the workflow
system has to decide suitable locations to process
the data and to deliver the results. In this
project, we perform a case study of eco-LiDAR data
from different European countries; the processing
will be done using the test bed offered by the
European Open Science Cloud. The project will
investigate data location aware scheduling
strategies, and service automation technologies for
workflow execution. The data processing pipeline and
data sources in the use case will be provided by
partners in the EU Lifewatch, and the test bed will
be provided by the European Open Science Cloud
early adopter program. |
Zhiming Zhao <z.zhao=>uva.nl> |
|
|
55 |
Wi-Fi 6 - BSS
colouring in the home environment
BSS Colouring aims to significantly improve the
end-user experience in terms of throughput and
latency in high density wireless environments, for
example in urban areas. A major cause of poor
performance in dense Wi-Fi environments is mutual
interference between access points that share the
same channel. Wi-Fi copes with this co-channel
interference (CCI) by Carrier Sense Multiple Access
with Collision Avoidance (CSMA/CA): a radio
wanting to transmit first listens on its frequency,
and if it hears another transmission in process it
waits a while before trying again. CCI is not
actually interference but more a sort of
congestion. It hinders the performance by increasing
the wait time as the same channel is used by
different devices. The CCI forces other devices to
defer transmissions and wait in a queue until the
first device finishes using the transmission line
and the channel is free. Even if two APs are too far
apart for them to detect each other's transmissions
directly, a client of either in between them can
effectively trigger collision avoidance when one AP
hears it talking to the other. Unnecessary medium
contention overhead that occurs when too many Access
Points (APs) and clients hear each other on the same
channel is called an overlapping basic service set
(OBSS).
BSS colouring is a feature in the IEEE 802.11ax
standard to address medium contention overhead due
to OBSS by assigning a different "colour", a number
between 1 and 63 that is added to the PHY header of
the 802.11ax frame, to each BSS in an environment.
When an 802.11ax radio is listening to the medium
and hears the PHY header of an 802.11ax frame sent
by another 802.11ax radio, the listening radio will
check the BSS colour bit of the transmitting radio.
Channel access is dependent on the colour detected:
- If the colour bit is the same, then the frame
is considered an intra-BSS transmission, and the
Preamble Detection (PD) threshold remains
unchanged, in other words normal CSMA/CA process
is followed.
- If the colour is different, then the frame is
considered an inter-BSS transmission. The
station increases its PD threshold to limit the
range of physical carrier sense so as to reduce
the chance of contention with the neighbour AP.
This is important because the number of Wi-Fi
stations being used continues to increase, and the
space between them is decreasing. BSS colouring
gives the 802.11ax standard, also labelled Wi-Fi 6
by the Wi-Fi Alliance, the ability to discover
spatial reuse opportunities. And spatial reuse can
be exploited to enable more parallel conversations
within the same physical space.
Our expectation is BSS Colouring will increase
Medium Access Control (MAC) efficiency and the user
should experience lower latency and an increase in
throughput. The noise floor of the operating channel
however will deteriorate, reducing the
Signal-to-Noise Ratio (SNR) potentially resulting in
a drop in the Modulation and Coding Scheme (MCS)
rate. In short, BSS colour improves MAC efficiency
at the cost of noise in the physical (PHY) layer.
Legacy 802.11a/b/g/n clients and APs will not be
able to interpret the colour bits because they use a
different PHY header format.
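Purely as an illustration of the channel-access decision described above (the project itself uses the MATLAB WLAN toolbox), a small Python sketch; the -82/-62 dBm thresholds are common 802.11 defaults and the adjusted OBSS_PD level is an assumption.

  # Illustrative colour-based deferral decision for an 802.11ax listener.
  def must_defer(rssi_dbm: float, own_colour: int, frame_colour: int,
                 pd_intra_bss: float = -82.0, pd_inter_bss: float = -62.0) -> bool:
      """Return True if the listening radio has to defer for the detected frame."""
      if frame_colour == own_colour:
          # Intra-BSS frame: normal CSMA/CA, unchanged preamble-detection threshold.
          return rssi_dbm >= pd_intra_bss
      # Inter-BSS frame: raised OBSS_PD threshold shrinks the deferral range,
      # enabling spatial reuse at the cost of more noise on the channel.
      return rssi_dbm >= pd_inter_bss

  # A -75 dBm neighbour frame: defer if same colour, transmit anyway if different.
  print(must_defer(-75, own_colour=12, frame_colour=12))   # True
  print(must_defer(-75, own_colour=12, frame_colour=40))   # False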
The main research questions are:
- What is the impact of using BSS colouring on
the performance in terms of throughput and
latency of the own and neighbour BSS taking the
increased MAC efficiency as well as the
increased noise into account?
- What is the impact of introducing BSS
colouring in the home environment in combination
with neighbouring legacy APs and clients?
- How does the increased use of mesh Wi-Fi
solutions combine with the introduction of BSS
colouring?
The students will use the MATLAB WLAN toolbox for
simulation, knowledge of MATLAB and the Wi-Fi PHY
and MAC layer is key for this research project. |
Arjan van der Vegt
<avdvegt=>libertyglobal.com>
|
|
|
61 |
Trust
bootstrapping for secure data exchange
infrastructure provisioned on demand.
Data exchange in the data market requires more than
just an end-to-end secure connection, which is well
supported by VPN. Data markets and data exchange
that are to be integrated into complex research,
industrial and business processes may require
connection to data market and data exchange services
supporting data search, combination and quality
assurance, as well as delivery to data processing or
execution facilities. This can be achieved by
providing a trusted data exchange and execution
environment on demand using a cloud hosting
platform.
This project will (1) investigate the current state of
the art in trust management, trust bootstrapping and
key management in provisioned on demand cloud based
services; (2) test several available solutions, and
(3) implement a selected solution in a working
prototype.
|
Yuri Demchenko <y.demchenko=>uva.nl> |
|
|
62 |
Supporting
infrastructure for distributed data exchange
scenarios when using IDS (Industrial Data Spaces)
Connector.
This project will investigate the International Data
Spaces Association (IDSA) Reference Architecture
Model (RAM) and the proposed IDS Connector and its
applicability to complex data exchange scenarios
that involve multiple data sources/suppliers and
multiple data consumers in a complex multi-staged
data centric workflow.
The project will assess the UCON library providing
native IDS Connector implementation, test it in a
proposed scenario that supports one of the general use
cases for secure and trusted data exchange, and
identify necessary infrastructure components to
support IDS Connector and RAM such as trust
management, data identification and lineage,
multi-stage session management, etc.
|
Yuri Demchenko <y.demchenko=>uva.nl>
|
|
|
63 |
Security
projects at KPN.
The following are some ideas for RP that we would
like to propose from KPN Security. Moreover, I would
like to mention that we are open for other ideas as
long as those are related to the proposed ones. To
give a better impression, I added the "rough ideas"
section as example of topics we would be interested
to supervise. We are more than happy to assist the
students at the moment of finding the right angle
for their research.
Info stealer landscape 2021
Create an overview
of the info stealer landscape 2020/2021. What
stealers are used, how do they work, what are
similarities, config extraction of samples, how to
detect the info stealers. Hoping this could lead
to something similar to
https://azorult-tracker.net/ where data is
published from automatically analyzing info
stealers. An example of what can be used for that
is openCTI
(https://github.com/OpenCTI-Platform/opencti).
Hacked wordpress sites
In today’s threat
landscape several malicious groups including
REvil, Emotet, Qakbot and Dridex are using
compromised WordPress websites to aid in their
operations. This RP would analyze how many
of those vulnerable websites are out there using
OSINT techniques like urlscan.io, Shodan and
RiskIQ. Also identifying the vulnerable components
and if they are hacked already would help fight
this problem. Ideally some notification system is
put in place to warn owners and hosting companies
about their website.
Rough ideas (freestyle)
- Literature review of the state of the art of
a given malware category (trojans, info
stealers, ransomware, etc.). Some examples:
- What cloud services are being the most abused
for distributing malware? (Pastebin, GitHub,
drive, Dropbox, etc.). URLHaus, public
sandboxes, and other sources could be starting
points. (Curious about cdns and social
applications like discord, telegram , and
others)
- Looking at raccine
https://github.com/Neo23x0/Raccine, what steps
do ransomware malware take and what
possibilities are there to create other vaccines
or how to improve Raccine.
- Building a non detectable web scraper
- A lot of the time, data from the darknet is
available on a website and no option for an
API/feed is available. These websites tend to
have scraping detection in several ways, ranging
from rate limiting to "human" behavior
checks. What is the best way to scrape these
types of websites in such a way that it is hard
to impossible to detect that a bot is retrieving
data? Can this be done while still maintaining
a good pace of retrieving data?
- Malware Aquarium
- Inspired by XKCD: https://xkcd.com/350/.
Can you create an open-source malware
aquarium? There are several challenges in how
to set it up, how to get infections going,
keeping it contained and how to keep track of
everything (alerts on changes)?
|
Joao Novaismarques
<joao.novaismarques=>kpn.com> |
|
|
64 |
Assessing data
remnants in modern smartphones after factory
reset.
Description:
Factory reset is a function built in modern
smartphones which restores the settings of a device
to the state it was shipped from the factory. While
its user data becomes inaccessible through the
device's user interface, research performed in 2018
reports that mobile forensic techniques can still
recover old data even after a smartphone undergoes
factory reset.
In recent smartphones, however, multiple security
measures are implemented by the vendors due to
growing concerns over security and privacy. The
implementation of encryption is especially supposed
to be effective for protecting user data from an
attacker after factory reset. In the meantime, its
impact on the digital forensics domain has not yet
been explored.
In this project, the effectiveness of factory reset
against digital forensics will be evaluated using
modern smartphones. Using the latest digital
forensic techniques, data remnants in factory-reset
smartphones are investigated, and their relevance
to the forensic domain will be evaluated.
Related research:
|
Zeno Geradts <zeno=>holmes.nl>
Aya Fukami <ayaf=>safeguardcyber.com> |
|
|
69 |
Assessing data remnants in
modern smartphones after factory reset.
Factory reset is a function built in modern
smartphones which restores the settings of a device
to the state it was shipped from the factory. While
its user data becomes inaccessible through the
device's user interface, research performed in 2018
reports that mobile forensic techniques can still
recover old data even after a smartphone undergoes
factory reset.
In recent smartphones, however, multiple security
measures are implemented by the vendors due to
growing concerns over security and privacy. The
implementation of encryption is especially supposed
to be effective for protecting user data from an
attacker after factory reset. In the meantime, its
impact on the digital forensics domain has not yet
been explored.
In this project, the effectiveness of factory reset
against digital forensics will be evaluated using
modern smartphones. Using the latest digital
forensic techniques, data remnants in factory-reset
smartphones are investigated, and their relevance
to the forensic domain will be evaluated.
Related research:
- https://calhoun.nps.edu/handle/10945/41441
- https://www.cl.cam.ac.uk/~rja14/Papers/fr_most15.pdf
- https://ld7un47f5ww196i744fd5pi1-wpengine.netdna-ssl.com/wp-content/uploads/2019/04/201811-SSpaper-DataRemanence.pdf
|
Zeno Geradts <zeno=>holmes.nl>
Aya Fukami <ayaf=>safeguardcyber.com>
|
|
|
71 |
Vocal Fakes.
Deep fakes are in the news, especially those where
real people are being copied. You see that really
good deepfakes use doubles and voice actors. Audio
deepfakes are not that good yet, and the available
tools are mainly trained on the English language.
> Voice clones can be used for good (for example,
for ALS patients), but also for evil, such as in CEO
fraud. It is important for the police to know the
latest state of affairs, on the one hand to combat
crime (think not only of fraud, but also of access
systems where the voice is used as biometric access
controls). But there are also applications where the
police can use voice cloning.
The central question is what the latest state of
technology is, specifically also for the Dutch
language, what the most important players are and
what are the starting points for recognizing it and…
to make a demo application with which the
possibilities can be demonstrated.
On the internet, the Corentin real-time voice
cloning repository is promoted, with which you can
create your own voice-cloning framework, so that you
can also clone other people's voices. This
repository on GitHub was open-sourced last year as
an implementation of a research paper about a
real-time working "vocoder". Perhaps a good starting
point?
|
Zeno Geradts <zeno=>holmes.nl> |
|
|
72 |
Web of Deepfakes.
According to the well-known magazine Wired, Text
Synthesis is at least as great a threat as
deepfakes. Thanks to a new language model, called
GPT-3, it has now become much easier to analyze
entered texts and generate variants and extensions
in large volumes. This can be used for guessing
passwords, automating social engineering and in many
forms of scams (friend-in-need fraud) and extortion.
It is therefore not expected that this will be used
to create incidents like deepfakes, but to create a
web of lies, disguised as regular conversations on
social media. This can also undermine the sincerity
of online debate. Europol also warns against text
synthesis because it allows the first steps of
phishing and fraud to be fully automated.
A lot of money is also invested in text synthesis
by marketing and services: for chatbots, but also
because you can tailor campaigns with the specific
language use of your target group. This technology
can also be used by criminals.
The central question is what the latest state of
affairs is, what the most important players are and
what are the starting points for recognizing text
synthesis in, for example, fraudulent emails /
chats, and for (soon) distinguishing real people
from chatbots. Perhaps interesting to build your own
example in slang or for another domain? |
Zeno Geradts <zeno=>holmes.nl> |
|
|
73 |
Comparing the
functionality of the state-of-the-art software
switches in a data sharing platform.
A container-based data-sharing platform is a dynamic
environment. There are various rules and policies in
this platform that may change at any time. In
addition, according to the network requirements
sometimes it is needed to change routing decisions
that are set between containers. Therefore, a
container-based data sharing platform has to be
programmable, so that it can manage and reconfigure
container connections and filtering rules when
necessary. Currently available technologies for
managing container connections cannot meet
the mentioned requirements.
Using a programmable switch can be the solution for
managing the container connections in a data-sharing
platform. However, there are multiple programmable
switches with different characteristics. Considering
security, agility in operation, performance, and
scalability as the main requirements, we need to
know which programmable switch suits such a sharing
platform and can handle the requirements better.
Related references:
|
Sara Shakeri <s.shakeri=>uva.nl>
|
|
|
74 |
Zero Trust architectures applications in the University ICT environment.
Traditionally security in ICT is managed by creating zones where within
that zone everything is trusted to be secure and security is seen as
defending the inside from attacks originating from the outside. For that
purpose firewall's and intrusion detection systems are used. That model
is considered broken. One reason is that a significant part of the
security incidents are inside jobs with grave consequences. Another
reason is that even good willing insiders (employees) may inadvertently
become the source of an incident because of phishing or brute force
hacking. For organizations such as the university an additional problem
is that an ever changing population of students, (guest) researchers,
educators and staff with wildly varying functions and goals (education,
teaching, research and basic operations) put an enormous strain on the
security and integrity of the ICT at the university. A radical different
approach is to trust nothing and start from that viewpoint. This RP is
to create an overview of zero-trust literature and propose a feasible
approach & architecture that can work at the University scale of
about 40000 persons.
|
Roeland Reijers <r.reijers=>uva.nl>
Cees de Laat <C.T.A.M.deLaat=>uva.nl>
|
|
|
|