[TYPES/announce] Postdoc openings at University of Pennsylvania
Benjamin C. Pierce
bcpierce at cis.upenn.edu
Wed Jan 18 09:47:18 EST 2012
Applications are invited for postdoc positions with these projects at the University of Pennsylvania (full details below):
- CRASH/SAFE: clean-slate redesign of the HW/OS/PL stack
- MRC/SOUND: foundations for secure networking
- DP: putting differential privacy to work
The positions are for one year in the first instance, with possible renewal for one or more additional years. Starting date is negotiable; salary commensurate with experience. Applications from women and members of other under-represented groups are particularly welcome.
Penn's Department of Computer and Information Science offers a vibrant research environment with a long tradition of excellence in programming languages and related areas. We are located in Philadelphia, a city that offers a rich array of cultural, historical, and nightlife attractions, parks and outdoor recreation, convenient public transportation, and affordable housing.
To apply, please send a CV, research statement, and the names of four people who can be asked for letters of reference to Benjamin Pierce (bcpierce at cis.upenn.edu).
CRASH/SAFE: Clean-Slate Redesign of the HW/OS/PL Stack
The SAFE project is part of CRASH, a DARPA-funded effort to design new computer systems that are highly resistant to cyber-attack, can adapt after a successful attack in order to continue rendering useful services, can learn from previous attacks how to guard against and cope with future attacks, and can repair themselves after attacks have succeeded. It offers a rare opportunity to rethink the hardware / OS / software stack from a completely clean slate, with no legacy constraints whatsoever.
Specifically, we aim to build a suite of modern operating system services that embodies and supports fundamental security principles—including separation of privilege, least privilege, and mutual suspicion—down to its very bones, without compromising performance. Achieving this goal demands an integrated effort focusing on (1) processor architectures, (2) operating systems, (3) formal methods, and (4) programming languages and compilers -- coupled with a co-design methodology in which all critical system layers are designed together, with a ruthless insistence on simplicity, security, and verifiability at every level.
The ideal candidate will have a Ph.D. in Computer Science, a combination of strong theoretical and practical interests, and expertise in two or more of the following areas: programming languages, security, formal verification, operating systems, and hardware design.
This project is joint with Harvard, Northeastern, and BAE Systems.
MRC/SOUND: Foundations for Secure Networking
The goal of the SOUND project is to design a distributed system that can offer cloud-style services but is highly resilient to cyber-attacks. Rather than focusing on specific known attacks, we would like to provide resiliency against a broad range of known and unknown (Byzantine) attacks; for instance, an adversary could compromise a certain number of nodes and modify them in some arbitrary way. Our goal is to detect and mitigate such attacks whenever possible, e.g., by reconfiguring the system to exclude any compromised nodes.
We approach this problem using the principle of mutual suspicion: Nodes continually monitor each other and check for unusual actions or changes in behavior that could be related to an attack. However, since we are assuming a very strong adversary, the bar for a successful solution is high: We require a strong, provable guarantee that the adversary cannot circumvent the system, as well as a practical design that can efficiently provide this guarantee. We expect that the SOUND project will build on results from the CRASH effort at the level of individual nodes; however, SOUND goes beyond CRASH by considering an entire distributed system with a heterogeneous mix of nodes, many of which may not be operating in a secure environment.
The ideal candidate will have a Ph.D. in Computer Science, a combination of theoretical interests and strong system-building skills, as well as expertise in two or more of the following areas: distributed systems, programming languages, networking, and computer security.
This project is joint with Portland State University and BAE Systems.
Putting Differential Privacy to Work
A wealth of data about individuals is constantly accumulating in various databases in the form of medical records, social network graphs, mobility traces in cellular networks, search logs, and movie ratings, to name only a few. There are many valuable uses for such datasets, but it is difficult to realize these uses while protecting privacy. Even when data collectors try to protect the privacy of their customers by releasing anonymized or aggregated data, this data often reveals much more information than intended. To reliably prevent such privacy violations, we need to replace the current ad hoc solutions with a principled data release mechanism that offers strong, provable privacy guarantees. Recent research on DIFFERENTIAL PRIVACY has brought us a big step closer to achieving this goal. Differential privacy allows us to reason formally about what an adversary could learn from released data, while avoiding the need for many assumptions (e.g., about what an adversary might already know), the failure of which has been the cause of privacy violations in the past. However, despite its great promise, differential privacy is still rarely used in practice. Proving that a given computation can be performed in a differentially private way requires substantial manual effort by experts in the field, which prevents the approach from scaling in practice.
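As a rough illustration of the idea (not part of the project description; the function names below are hypothetical), the classic Laplace mechanism releases a numeric query result with calibrated random noise. For a counting query, whose result changes by at most 1 when any one individual is added or removed, noise drawn from a Laplace distribution with scale 1/epsilon suffices for epsilon-differential privacy:

```python
import math
import random

def laplace_noise(scale):
    # Sample from Laplace(0, scale) by inverse transform sampling.
    u = random.random() - 0.5          # uniform in [-0.5, 0.5)
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def private_count(records, predicate, epsilon):
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (one individual's presence or
    absence changes the true count by at most 1), so Laplace noise
    with scale 1/epsilon gives the epsilon-DP guarantee.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller values of epsilon mean stronger privacy but noisier answers; choosing epsilon, tracking the cumulative privacy budget across queries, and proving such guarantees for richer computations are exactly the kinds of issues the project addresses.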
This project aims to put differential privacy to work: to build a system that supports differentially private data analysis, can be used by the average programmer, and is general enough to be used in a wide variety of applications. Such a system could be used pervasively and make strong privacy guarantees a standard feature wherever sensitive data is being released or analyzed. Specific contributions will include ENRICHING THE FUNDAMENTAL MODEL OF DIFFERENTIAL PRIVACY to address practical issues such as data with inherent correlations, increased accuracy, privacy of functions, and privacy for streaming data; DEVELOPING A DIFFERENTIALLY PRIVATE PROGRAMMING LANGUAGE, along with a compiler that can automatically prove programs in this language to be differentially private and a runtime system that is hardened against side-channel attacks; and SHOWING HOW TO APPLY DIFFERENTIAL PRIVACY IN A DISTRIBUTED SETTING in which the private data is spread across many databases in different administrative domains, with possible overlaps, heterogeneous schemata, and different expectations of privacy. The long-term goal is to combine ideas from differential privacy, programming languages, and distributed systems to make data analysis techniques with strong, provable privacy guarantees practical for general use. The themes of differential privacy are also being integrated into Penn's new undergraduate curriculum on Market and Social Systems Engineering.
The ideal candidate for this position will have a Ph.D. in Computer Science, a combination of strong theoretical and practical interests, and expertise in at least two of: programming languages, theoretical computer science, and systems software.