I am looking for highly motivated students to undertake research in parallel programming models, resilience, and numerical computing. If you are interested in studying with me, please begin with the following steps (stolen shamelessly from Prof. Steve Blackburn):

- Make sure you understand the degree program you’re interested in.
    - If you are interested in a **PhD** or **MPhil**, please start by reading our College’s student web page.
    - If you are an ANU student interested in **Honours**, please understand the requirements for Honours entry.
- **Before you do anything more,** please look at my publications to get a sense of my research directions. You may also consider the student projects described below. Note that these projects are only indicative and are not and never will be exhaustive; in practice, most projects emerge from a discussion between me and the student involved.
- If you’re still interested:
    - If you’re interested in a **PhD** or **MPhil**, please complete our College’s pre-application process, including the self-assessment for graduate study, and email **me** (*with the words “PhD/MPhil Enquiry: ” at the start of the subject line*), as well as mailing the address indicated on our College web page.
    - If you’re an ANU student interested in **Honours**, please email **me** (*with the words “Honours Enquiry: ” at the start of the subject line*) and we’ll set up a time to talk.
Australian citizens who undertake this project may be eligible for a generous stipend. Please contact me to find out more.

This project is one stream in a larger effort that seeks to leverage distributed compute infrastructure to drastically improve the runtime efficiency and scalability of Boolean SAT(isfiability) solving, and in particular SAT solving for transition-system models.

Our first objective is to develop and showcase novel parallel approaches that are demonstrated, both theoretically and empirically, to be convincingly faster (walltime) than state-of-the-art serial procedures. Our second objective is to develop and showcase novel parallel approaches that are able to efficiently solve problems whose *Conjunctive Normal Form* (CNF) is very large — e.g., billions of clauses.
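To make the CNF representation concrete: by convention (the DIMACS format used by most SAT solvers), a formula is a list of clauses, each a list of non-zero signed integers, where a negative integer denotes a negated variable. A minimal illustrative sketch in Python:

```python
# Minimal sketch of the DIMACS-style clause representation used by most
# SAT solvers: a formula is a list of clauses, each a list of signed ints.
# A negative literal -v means "variable v is false".

def evaluate(clauses, assignment):
    """True iff every clause has at least one satisfied literal.

    `assignment` maps variable number -> bool.
    """
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 OR NOT x2) AND (x2 OR x3)
formula = [[1, -2], [2, 3]]
print(evaluate(formula, {1: True, 2: True, 3: False}))   # True
print(evaluate(formula, {1: False, 2: True, 3: False}))  # False
```

A CNF with billions of clauses is simply this structure at a scale where even storing and streaming the clause database becomes the engineering problem.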

We are guided by bottlenecks in SAT solving for the following application domains:

- Program analysis via symbolic execution; e.g. [1]
- The cryptanalysis of hash functions; e.g. [2]
- Analysis of security properties, and in particular privacy properties; e.g. [3]

We seek to advance SAT solving system technology, with the concrete objectives being to create a SAT system that:

1. Is convincingly faster (walltime) than existing monolithic serial systems; ideally we would show an exponential separation, but at the very least a polynomial speedup factor.

2. Scales gracefully to problems where the given CNF representation is very large – e.g., billions of clauses.

Our application focus shall be domains where the underlying decision problems are most naturally modelled as state transition systems. We expect to achieve our objectives through careful scientific investigation of high-performance computing technologies for the efficient and targeted discovery and communication of information between the processing elements in a distributed search.

There are two paradigms for solving Boolean SAT problems using serial processing: (i) stochastic local search (SLS), and (ii) backtracking search, such as conflict-driven clause learning (CDCL). Both have their merits, and the latter is currently considered best suited, in isolation, to solving transition-system problems. Although the point is debatable, most researchers would agree that the two techniques have yet to be coupled effectively.
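The skeleton of the backtracking family can be conveyed in a few lines: a minimal DPLL procedure with unit propagation and chronological backtracking. This is an illustrative sketch only; CDCL solvers add clause learning, watched literals, and restarts on top of this skeleton, and all names here are invented for the example rather than taken from any particular solver.

```python
# Minimal DPLL sketch: unit propagation + chronological backtracking.
# Clauses use the DIMACS convention: lists of signed integers.

def simplify(clauses, lit):
    """Assume literal `lit` is true: drop satisfied clauses, shrink the rest.
    Returns None if an empty clause (a conflict) is produced."""
    out = []
    for clause in clauses:
        if lit in clause:
            continue                        # clause satisfied, drop it
        reduced = [l for l in clause if l != -lit]
        if not reduced:
            return None                     # conflict
        out.append(reduced)
    return out

def dpll(clauses, assignment=()):
    # Unit propagation: repeatedly satisfy forced (unit) clauses.
    while True:
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            break
        clauses = simplify(clauses, units[0])
        assignment = assignment + (units[0],)
        if clauses is None:
            return None                     # conflict under current decisions
    if not clauses:
        return assignment                   # all clauses satisfied
    # Branch on the first literal of the first clause, trying both polarities.
    lit = clauses[0][0]
    for decision in (lit, -lit):
        sub = simplify(clauses, decision)
        if sub is not None:
            result = dpll(sub, assignment + (decision,))
            if result is not None:
                return result
    return None                             # both branches failed: UNSAT

print(dpll([[1, -2], [2, 3], [-1, -3]]))   # a tuple of satisfying literals
print(dpll([[1], [-1]]))                    # None (unsatisfiable)
```

An SLS solver instead keeps a complete (possibly falsifying) assignment and repeatedly flips variables in unsatisfied clauses; the open question noted above is how to make the two search regimes cooperate.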

Parallel computation has found a number of uses in SAT solving. Parallelism can be exploited in: (i) tuning algorithm configurations, (ii) search-space splitting, and (iii) portfolio approaches. Identifying and sharing good constraints is key to efficient parallelism. We note that bottlenecks have been identified in the structure of resolution refutations which place hard limits on parallelism in search-space splitting for CDCL solvers. Finally, once we start solving bounded model checking (BMC) problems, as we expect to do in this project, parallelisation also arises in choosing the search horizon at which to pose queries.
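The search-space-splitting idea can be illustrated with a toy "cube" split: fix a chosen variable both ways and hand each restricted formula to an independent worker; since the two cubes partition the search space, the formula is satisfiable iff some cube is. This sketch (illustrative names, brute-force sub-solver, threads standing in for the distributed workers a real system would use) is not a scalable implementation; production systems such as cube-and-conquer pick many splitting variables heuristically and share learned clauses between workers.

```python
# Toy search-space splitting: split on one variable, solve each "cube"
# independently (here with brute force), and combine the results.
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def satisfiable(clauses, n_vars, fixed=()):
    """Brute-force check; `fixed` is a tuple of literals assumed true."""
    clauses = list(clauses) + [[lit] for lit in fixed]
    for bits in product([False, True], repeat=n_vars):
        assignment = {v: bits[v - 1] for v in range(1, n_vars + 1)}
        if all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def split_solve(clauses, n_vars, split_var):
    # The two cubes partition the search space on split_var,
    # so the formula is SAT iff at least one cube is SAT.
    cubes = [(split_var,), (-split_var,)]
    with ThreadPoolExecutor(max_workers=2) as pool:
        results = pool.map(lambda cube: satisfiable(clauses, n_vars, cube), cubes)
    return any(results)

print(split_solve([[1, -2], [2, 3], [-1, -3]], 3, 1))  # True
print(split_solve([[1], [-1]], 1, 1))                  # False
```

The hard limits mentioned above arise because some refutations cannot be decomposed into independent sub-refutations of this shape, no matter how the split is chosen.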

A fundamental process in any state-of-the-art symbolic execution tool is the translation/compilation of queries into decision problems, followed by a call to a satisfiability/decision procedure that resolves the query. Mayhem and angr use the Satisfiability Modulo Theories (SMT) system Z3. S2E supports a number of solvers, including Z3, which is to our knowledge the solver most frequently used for industrial-strength analysis. We note that the open-source (MIT License) Z3 tool itself implements a bespoke serial Boolean satisfiability procedure to solve hard decision problems. The focus of this project shall be on developing efficient parallelisation strategies for variable selection, value selection, and constraint propagation in the decision engine that underlies an effective symbolic execution toolchain.

[1] Baldoni, R., Coppa, E., D’Elia, D. C., Demetrescu, C., & Finocchi, I. (2018). A survey of symbolic execution techniques. ACM Computing Surveys, 51(3)

[2] Mironov, I., & Zhang, L. (2006). Applications of SAT solvers to cryptanalysis of hash functions. IACR Cryptology ePrint Archive.

[3] Cortier, V., Dallon, A., & Delaune, S. (2017). SAT-equiv: An efficient tool for equivalence properties. CSF 2017 (pp. 481–494).

Theme: Systems

Area: High Performance Computing

Eligible degree types: Honours, Graduate, Masters coursework & research, Higher degree by research

Numerical computing is a fundamental tool for scientific and engineering applications – but can we really trust the results? Typical numerical computations incorporate error from multiple sources: input uncertainty, sampling (discretization) and roundoff. In a floating point computation, failing to account for sampling and roundoff error may generate results that are apparently exact but are actually wrong. Trusting these results could lead to confusion, or catastrophe! [1]
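The roundoff problem is easy to demonstrate. In IEEE 754 double precision, 0.1 has no exact binary representation, so even summing ten copies of it does not give exactly 1.0, while a correctly rounded summation (Python's `math.fsum`) does:

```python
import math

# Ten copies of 0.1 "should" sum to exactly 1.0 -- but 0.1 is not
# representable in binary floating point, so naive summation drifts.
naive = sum([0.1] * 10)
print(naive == 1.0)           # False
print(naive)                  # 0.9999999999999999
print(math.fsum([0.1] * 10))  # 1.0 (correctly rounded sum)
```

The result looks plausible and is silently wrong in the last place; accumulated over millions of operations, such errors can dominate the answer, as in the Patriot missile failure [1].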

Unum computing [2] promises to eliminate sampling and roundoff error by operating on variable precision intervals called uboxes, which are guaranteed to bound correct results. Unums also promise reduced cost compared to existing floating-point number formats as they require on average fewer bits to store each number, potentially saving both memory and bandwidth.
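The underlying idea of computing with guaranteed enclosures can be sketched with ordinary interval arithmetic. This toy class is only a crude stand-in for unum arithmetic: real ubounds use variable-precision endpoints and round outward so the enclosure remains rigorous, whereas this sketch uses plain doubles and ignores outward rounding. The class and names are invented for illustration.

```python
class Interval:
    """Toy closed interval [lo, hi]; a crude stand-in for a ubound."""
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, lo if hi is None else hi

    def __add__(self, other):
        # Endpoint-wise addition encloses x + y for all x, y in the operands.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The product's range is bounded by the four endpoint products.
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# An uncertain input: x lies somewhere in [1.9, 2.1].
x = Interval(1.9, 2.1)
y = x * x + Interval(1.0)  # encloses x^2 + 1 for every x in [1.9, 2.1]
print(y)                   # roughly [4.61, 5.41], up to roundoff
```

Note the cost trade-off the critics point to: every operand now carries two endpoints, and naive interval evaluation can overestimate badly when a variable appears more than once in an expression.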

Despite these great promises, unum computing has yet to be proven for significant scientific applications. This may be due in part to a lack of programming model and library support, and also to limitations in current understanding of suitable numerical methods and their associated costs. Unum computing has also been criticized as impractical, or even dangerous, for incurring excessive costs on certain computations compared to floating point, and for discounting the need for traditional error analysis [3].

In this project, you will critically evaluate unum methods through practical applications in physical simulation. You will contribute to current understanding of unum computing (both benefits and limitations), and so help drive an exciting new area of numerical research.

Project Goals:

- Evaluate the flexibility and performance of unum computing for physical simulation
- Enhance support for unum computing in a major numerical programming environment

Requirements:

- Sound programming skills
- Strong interest in numerical computing, applied mathematics and/or physics

Background Reading:

[1] Douglas Arnold “The Patriot Missile Failure”.

[2] John L. Gustafson (2016) “The End of Error – Unum Computing”. CRC Press

[3] William Kahan (2016) “Commentary on ‘The End of Error – Unum Computing'”.

Theme: Systems

Area: High Performance Computing, Programming Languages, Design and Implementation

Eligible degree types: Honours

Computer simulation is fundamental to many areas of science including chemistry, materials science and fluid dynamics. Computer experiments can be conducted over a wider range of conditions and at scales (both very small and very large) that would be impractical for a physical experiment. As the complexity of scientific applications increases, programming models must evolve to support productive development of codes that fully exploit the parallelism available on modern computer architectures.

Proxy applications are used by domain scientists to communicate requirements to hardware architects and system software developers [1]. A proxy app provides a critical tool to evaluate parallel programming models for high performance scientific computing, by capturing the essence of a complete scientific code in a small specification that is flexible enough to permit varied and creative implementations.

Julia is a modern language designed for productive development of high performance numerical codes [2]. As well as generating efficient, vectorized code, Julia supports both multithreaded and distributed multi-task parallelism. In this project, you will use one or more scientific proxy apps to evaluate and extend the parallel programming model in the Julia language.
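The flavour of such a proxy kernel can be conveyed with a toy example: one explicit step of a 1D diffusion stencil, evaluated in independent index chunks that could each go to a separate worker. The sketch below is in Python purely for illustration (a Julia version would express the same chunking with `Threads.@threads` or `@distributed`); the kernel and all names are invented for this example, and threads stand in for whatever workers the programming model provides.

```python
from concurrent.futures import ThreadPoolExecutor

def step_serial(u, k=0.1):
    """One explicit diffusion step on interior points; endpoints held fixed."""
    return [u[0]] + [
        u[i] + k * (u[i - 1] - 2 * u[i] + u[i + 1])
        for i in range(1, len(u) - 1)
    ] + [u[-1]]

def step_chunked(u, k=0.1, workers=4):
    """Same update, computed over independent index chunks: each chunk reads
    only the (immutable) old state, so no halo exchange is needed here."""
    def chunk(lo, hi):
        return [u[i] + k * (u[i - 1] - 2 * u[i] + u[i + 1])
                for i in range(lo, hi)]

    n = len(u)
    size = max(1, (n - 2) // workers)
    spans = [(i, min(i + size, n - 1)) for i in range(1, n - 1, size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        pieces = pool.map(lambda s: chunk(*s), spans)  # order-preserving

    out = [u[0]]
    for piece in pieces:
        out.extend(piece)
    out.append(u[-1])
    return out

u0 = [0.0] * 8 + [1.0] + [0.0] * 8           # a spike in the middle
print(step_serial(u0) == step_chunked(u0))   # True: identical results
```

A full proxy app such as LULESH [3] wraps kernels of this shape in realistic data structures and communication patterns, which is exactly what stresses a language's parallel programming model.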

Project Goals:

- Evaluate the Julia parallel programming model using one or more scientific proxy applications
- Improve support for high-performance computing in Julia

Requirements:

- Strong programming skills
- Interest in numerical computing, applied mathematics and/or physics

Background Reading:

[1] Exascale Computing Project “Proxy Apps”

[2] Julia Language

[3] ExMatEx: LULESH Proxy Application

[4] Karlin et al. (2012) “Exploring Traditional and Emerging Parallel Programming Models using a Proxy Application”
