
Research Project:

The purpose of the research project is to give you an opportunity to investigate some cryptographic topic of interest in depth. The project can take the form of a paper or a computer program. A paper should be around 10-12 pages in length (single-spaced), and can be primarily expository (e.g., an explanation of elliptic curve cryptosystems) or analytical (e.g., a comparison of alternative proposals for NIST's new cryptographic hash function standard).

I am looking for some technical depth in the project, rather than a high-level survey. For example, one of the projects listed below involves comparing Keccak, the newly selected SHA-3 hash function standard, with one or more of the other finalists in the SHA-3 competition. By searching online, you can easily put together a ten-page paper that provides a high-level description of the various finalists ("this is what Keccak does"; "this is what Blake does"; and so on). This is not the sort of project I'm looking for. Instead, a much more interesting and successful project would be to look at Keccak and one of the other finalists in depth. You would of course present the design of the two hash functions. But then you would investigate some deeper questions: Why are the given hash functions designed that way? What attacks do these design features seek to prevent? What are the relative advantages of the design decisions on which the two algorithms are based?

Sample topics are listed below. These are just suggestions; you may certainly choose a topic that does not appear on the list. If you do so, however, please check with me to make sure that the topic is suitable. For some but not all of the topics, I explain in more detail what I mean by "technical depth". Please feel free to discuss this issue with me as it pertains to your own project.

1. Substitution Ciphers Write a program that automatically decrypts ciphertexts that have been created using the kind of monoalphabetic substitution illustrated by the first problem set. This is an interesting project, but it is also much harder than you might expect.

Your program should work on inputs that don't include spaces (i.e., the input should be formatted like the examples on the first problem set). A program that relies on word spacing is easier to write, but isn't nearly as interesting: no one is going to include spaces in their ciphertexts in order to make cryptanalysis easier!

You can find research papers on this topic, and you may (but don't have to) take one of the approaches discussed in those papers as your starting point. If you do so, you should plan to modify the given approach in some interesting way. Whether or not you take some existing approach as your starting point, you might want to experiment with alternative approaches.
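One approach that appears in the research literature on this problem is hill climbing: repeatedly swap two letters of a candidate key and keep the swap whenever the resulting decryption "looks more like English" under some scoring function. A minimal sketch, using a made-up toy bigram table for scoring (a serious attempt would load real quadgram statistics from a corpus):

```python
import random
import string

# Toy log-score table for a few common English bigrams.  The values are
# illustrative only; a real solver would use quadgram counts from a corpus.
BIGRAM_SCORE = {"TH": 3.0, "HE": 2.5, "IN": 2.0, "ER": 2.0, "AN": 1.8,
                "RE": 1.5, "ON": 1.4, "AT": 1.2, "EN": 1.2, "ND": 1.1}

def score(text):
    """Sum bigram scores; higher means more English-like."""
    return sum(BIGRAM_SCORE.get(text[i:i + 2], 0.0) for i in range(len(text) - 1))

def decrypt(ciphertext, key):
    """key is a 26-letter string mapping ciphertext letters to plaintext letters."""
    return ciphertext.translate(str.maketrans(string.ascii_uppercase, key))

def hill_climb(ciphertext, iterations=10000, seed=0):
    """Start from a random key; keep a random two-letter swap only if it
    improves the score of the resulting decryption."""
    rng = random.Random(seed)
    key = list(string.ascii_uppercase)
    rng.shuffle(key)
    best = score(decrypt(ciphertext, "".join(key)))
    for _ in range(iterations):
        i, j = rng.sample(range(26), 2)
        key[i], key[j] = key[j], key[i]          # try swapping two key letters
        s = score(decrypt(ciphertext, "".join(key)))
        if s > best:
            best = s
        else:
            key[i], key[j] = key[j], key[i]      # revert a non-improving swap
    return "".join(key), best
```

Hill climbing gets stuck in local optima, which is one reason this project is harder than it looks; random restarts and better statistics are the usual remedies.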

Your project should consist of your source code, together with a short paper that describes your algorithm (or algorithms, if you experiment with alternatives), together with some sample results. The results should include the two examples on the first problem set, together with some other test cases. If you decide to pursue this project, please see me to discuss specific performance requirements.

2. Differential and Linear Cryptanalysis Differential and linear cryptanalysis are general-purpose attacks that play a central role in the design of block ciphers. Indeed, provable resistance to these attacks has become a fundamental design criterion for block ciphers; no one will take your new block cipher seriously unless you can provide such a proof. In class we will discuss some of the basic features of differential cryptanalysis. However, we will omit many of the details, and we will not cover the more powerful technique of linear cryptanalysis.

The central role of differential cryptanalysis is illustrated by the following episode. Until AES came along, the most popular and most highly scrutinized iterated block cipher was DES. For many years, however, the design of the DES components, especially the S-boxes, lay shrouded in mystery, mainly because the design rationale was classified! (That is, the definition of the S-boxes was public from the beginning, but the reasons for designing the S-boxes in that particular way were not.) So researchers studied the DES S-boxes in great detail, and from all angles, but no one other than the designers themselves knew for sure the reasons for various design decisions.

Then in 1994 Don Coppersmith, one of the prominent members of the IBM/NSA DES design team, published a paper in which he explained that the design of the DES S-boxes was driven in large part by a desire to make DES resistant to differential cryptanalysis. Now the first published papers on differential cryptanalysis didn't appear until 1990; but Coppersmith reports that his group was already aware of this technique in 1974! They didn't publish the technique because of security considerations. Once the technique had been discovered and reported in the open literature, Coppersmith was able to report on this aspect of the rationale behind the DES S-box design. Of course, as Schneier observes in his Applied Cryptography book, this episode raises the question of whether there are some other powerful cryptanalytic techniques that are still unknown to the public, and which also drove the design of the S-boxes.

I list here two kinds of projects that pertain to differential (or linear) cryptanalysis. If there is some other sort of project you'd like to pursue under this general heading, please feel free to discuss it with me.

2.1 Implementation of Differential Cryptanalysis As I envision it, you would focus specifically on the differential cryptanalysis of DES. In the first edition of his book, Stinson gives a detailed presentation of the differential cryptanalysis of six-round DES. Implementing this attack makes for a really interesting project. Stinson's first edition is long out of print, but you can copy the pertinent pages from my copy of the book. An alternative approach to this project is to implement the attack that Stinson describes in section 3.4 of the current edition, which pertains to the substitution-permutation network that he presents in section 3.2.
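A natural first step for either variant of this project is computing a difference distribution table for an S-box, since the attack's high-probability characteristics are read off such tables. A minimal sketch (the sample S-box below is the 4-bit one commonly reproduced from Stinson's SPN example; verify it against your edition):

```python
def difference_table(sbox):
    """Difference distribution table: entry [dx][dy] counts the inputs x for
    which S(x) XOR S(x XOR dx) == dy.  Large entries in the nonzero rows give
    the high-probability differentials that the attack chains together."""
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for dx in range(n):
        for x in range(n):
            table[dx][sbox[x] ^ sbox[x ^ dx]] += 1
    return table

# 4-bit S-box as commonly reproduced from Stinson's SPN example
# (check against your edition of the book).
SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]
```

Every row sums to 16, the dx = 0 row is concentrated entirely at dy = 0, and all entries are even; the attack hunts for the largest entries in the nonzero rows.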

2.2 Resistance to Differential or Linear Cryptanalysis How does one design a block cipher that is resistant to differential and linear cryptanalysis? As indicated above, anyone who designs a block cipher must come to grips with this question. A project on this topic would focus on some specific aspect of the general problem (otherwise you'd find yourself writing a book). You could focus on either differential or linear cryptanalysis. And you could focus on either Feistel ciphers or Substitution-Permutation (SP) networks (or, more specifically, on either DES or AES). In any case, you would write a tutorial paper that explains how, for example, to design a Feistel cipher that is provably resistant to differential cryptanalysis. This project is challenging (i.e., hard), but leads to a deep understanding of block cipher design.

Some references:

i. https://www.ciphersbyritter.com/RES/SBOXDESN.HTM
ii. A. Webster and S.E. Tavares, "On the Design of S-Boxes"
iii. N. Courtois, G. Castagnos, L. Goubin, "What do DES S-boxes Say to Each Other?"
iv. L. Knudsen, "Practically Secure Feistel Ciphers"
v. K. Nyberg and L. Knudsen, "Provable Security Against a Differential Attack"
vi. S. Park et al., "On the Security of Rijndael-like Structures Against Differential and Linear Cryptanalysis"
vii. S. Hong et al., "Provable Security Against Differential and Linear Cryptanalysis for the SPN structure"
viii. Heys and Tavares, "Substitution-Permutation Networks Resistant to Differential and Linear Cryptanalysis"

3. Algebraic Cryptanalysis Algebraic cryptanalysis involves representing a block cipher as a system of multivariate equations, and then solving the equations to recover the key. As we will discuss in class, algebraic cryptanalysis is a relatively new idea that has gained considerable impetus from the fact that AES, in contrast to earlier block ciphers, is defined algebraically. While AES itself has so far proved resistant to algebraic attacks, research on algebraic cryptanalysis has continued. One result of this research was the discovery in 2007 of the first algebraic attack that breaks a full-round real-life block cipher. The cipher in question is KeeLoq, which is used by a number of major car manufacturers in wireless devices that unlock doors and alarms. While algebraic cryptanalysis can be a mathematically difficult topic (for example, take a look at papers that discuss the proposed XSL attack on AES), the successful cryptanalysis of KeeLoq is mathematically much more manageable. Writing an expository paper on this attack would be a good way of learning about algebraic cryptanalysis, which is likely to become an increasingly important topic. And it's always instructive to learn about the bad things that can happen in the real world to seemingly secure ciphers.

i. Gregory Bard, Algebraic Cryptanalysis, Springer 2009 (chapters 2 and 3)
ii. N. Courtois, G. Bard, D. Wagner, "Algebraic and Slide Attacks on KeeLoq"

4. What is a random sequence? A random sequence is one that "looks" random. For example,

110010101100001010010

looks pretty random;

101010101010101010101

does not.
does not. But we need to characterize the notion of randomness more precisely in order to evaluate (pseudo) random number generators. What I have in mind for this project is a paper that investigates randomness from a logical/philosophical viewpoint: how do we characterize randomness formally? A good starting point for such a paper is provided by the book by Fine, listed below. This book is out of print and almost impossible to find, but I have a copy that you can borrow.

i. T. Fine, Theories of Probability, Academic Press, 1973
ii. D. Knuth, The Art of Computer Programming, volume 2: Seminumerical Algorithms, section 3.5

5. Cryptographically secure pseudorandom number generators For cryptographic applications, the requirements on a random number generator are more stringent than they are for simulation. In particular, it is not enough, for cryptographic purposes, for a PRNG to satisfy the standard statistical tests for randomness. For example, you can use a linear congruential generator (LCG) for simulation if you're careful, but such a generator is not adequate for cryptographic purposes. A pseudorandom generator is cryptographically secure if, even when a cryptanalyst has obtained a long segment of its output, it is infeasible for her to infer a new segment of its output. For this topic, you could write a paper that explains the workings of a cryptographically secure generator such as the Blum-Blum-Shub generator, and presents arguments for its security. Alternatively, you could write an expository paper about cryptanalytic attacks against an insecure generator such as the LCG. A third option would be to implement such an attack.
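As a concrete anchor, here is a sketch of the Blum-Blum-Shub generator itself, with toy parameters chosen only for illustration (a real instance would use secret primes of cryptographic size):

```python
def bbs_bits(p, q, seed, nbits):
    """Blum-Blum-Shub: iterate x -> x^2 mod n and output the low bit of each
    state.  p and q must be primes congruent to 3 mod 4, and the seed must be
    coprime to n = p*q.  Security rests on the hardness of factoring n."""
    n = p * q
    x = seed * seed % n              # square once so the state is a quadratic residue
    out = []
    for _ in range(nbits):
        x = x * x % n
        out.append(x & 1)
    return out

# Toy parameters: 499 and 547 are both primes congruent to 3 mod 4,
# but are far too small for any real use.
```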

i. J. Boyar, "Inferring Sequences Produced by Pseudo-Random Number Generators", Journal of the ACM, 1989

ii. J. Boyar, "Inferring Sequences Produced by a Linear Congruential Generator Missing Low-Order Bits", Journal of Cryptology, 1989
iii. P. Garrett, Making, Breaking Codes, chapter 21
iv. J. Kelsey et al., "Cryptanalytic Attacks on Pseudorandom Number Generators"

6. Hash Functions and the SHA-3 Competition Collision attacks against MD5 and SHA-1 were discovered in 2004-2005, and caused considerable turmoil in the cryptographic community. In addition to showing that two of the most popular hash functions were insecure (in a sense that we will discuss in class), these attacks called into question the viability of the iterative construction paradigm on which most hash function families were based. Thus, while SHA-2 is still believed to be secure (it remains part of the NIST-approved standard), the fact that it is constructed according to the same paradigm as a number of hash functions that have been broken makes cryptographers uneasy. This uneasiness concerning the foundations of hash function construction led NIST to sponsor a competition for a new hash function standard, constructed according to some new principles. The five-year competition culminated in October 2012 with the selection of Keccak as the new SHA-3 hash function standard.

This is a very current topic, which is of significant theoretical and practical interest. I list here three possible hash-related projects. These are different projects; I don't expect you to do more than one of them. As usual, it will be fine to modify or combine any of these suggestions.

Meaningful Hash Collisions. The idea of a "meaningful" hash collision arises in the following way. The original collision attacks proceeded by finding random pairs of inputs that have the same hash values. The existence of such attacks shows that the hash functions in question are, from a theoretical standpoint, fatally flawed. However, for most practical applications of collision attacks, the inputs must be meaningful rather than random. We'll discuss this issue in class; meanwhile, the first reference below provides a nice informal explanation.

A paper on this topic would provide an in-depth look at possible practical implications of the recently discovered hash function collisions.
i. "The Poisoned Message Attack":
https://th.informatik.uni-mannheim.de/People/lucks/HashCollisions/
ii. A. Lenstra and B. de Weger, "On the Possibility of Constructing Meaningful Hash Collisions for Public Keys"
iii. "MD5 Considered Harmful Today":
https://www.win.tue.nl/hashclash/rogue-ca/
Collision Attacks. Explain in detail how one of the collision attacks works. This is a very interesting but also challenging (a euphemism for "hard") topic.

i. F. Chabaud and A. Joux, "Differential Collisions in SHA-0"
ii. J. Liang, "Improved Collision Attack on Hash Function MD5"
iii. M. Stevens, "Fast Collision Attack on MD5"
iv. X. Wang, "Finding Collisions in the Full SHA-1"
v. De Cannière and Rechberger, "Finding SHA-1 Characteristics: General Results and Applications"

The NIST SHA-3 Competition NIST announced the competition for a new hash function standard in November 2007. In July 2009 it announced fourteen second-round candidates, and in December 2010 it announced the five finalists: Blake, Grøstl, JH, Keccak, and Skein. As mentioned, Keccak was named the new SHA-3 algorithm in October 2012.

A paper on this topic would critically compare Keccak with one or more (but probably one) of the other finalists. This is an interesting topic, which provides a good way of learning about some very current developments in cryptography. But please be careful if you choose this topic. It's easy to write a paper that describes at a high level how Keccak works, then describes at a high level how (say) Blake works. This kind of high-level description would of course be part of your paper. But I'm also asking you to look more deeply at the design decisions made by the algorithms you consider. What motivates those design decisions? Are there tradeoffs between security and performance? What are the possible advantages of one algorithm's design choices over the other's? So what I'm looking for is not just a description of how the algorithms work, but a discussion of why they were designed that way.

Instead of comparing Keccak with one of the other SHA-3 finalists, you could instead compare it to SHA-2 (the guidelines from the previous paragraph apply to this version of the project as well).

i. NIST cryptographic hash function site:
https://csrc.nist.gov/groups/ST/hash/sha-3/
ii. The SHA-3 zoo:
https://ehash.iaik.tugraz.at/wiki/The_SHA-3_Zoo
iii. Keccak:
https://keccak.noekeon.org/

7. Knapsack Cryptosystems As we will discuss in class, the Merkle-Hellman knapsack cryptosystem is intuitively very appealing. The only problem is that it has been broken! The purpose of this project is to extend our class discussion by actually working through the details of the attack. You could either implement the attack, or write a paper that explains how the attack works. A good overview is provided by the following paper:

A.M. Odlyzko, "The Rise and Fall of Knapsack Cryptosystems"

While the details of the attack are very technical, they are presented in an accessible way by Arto Salomaa, in his book Public-Key Cryptography. If you can't find the book in a library, and don't feel like buying it for a small fortune, you can copy the pertinent chapter from my own copy of the book.
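For orientation before tackling the attack, the Merkle-Hellman system itself fits in a few lines. A sketch with illustrative toy parameters (the weights, modulus, and multiplier below are made up for the example and are nowhere near realistic sizes):

```python
def superincreasing_decrypt(target, private_seq):
    """Greedy subset-sum solve, from the largest weight down; this works only
    because private_seq is superincreasing (each term exceeds the sum of all
    earlier terms)."""
    bits = []
    for w in reversed(private_seq):
        if target >= w:
            bits.append(1)
            target -= w
        else:
            bits.append(0)
    return list(reversed(bits))

def merkle_hellman_demo():
    private = [2, 3, 7, 14, 30, 57, 120, 251]   # superincreasing private key
    q, r = 491, 41                              # q > sum(private), gcd(r, q) = 1
    public = [(r * w) % q for w in private]     # disguised public knapsack
    message = [1, 0, 1, 1, 0, 0, 1, 0]
    c = sum(b * pk for b, pk in zip(message, public))
    # Legitimate receiver: undo the modular disguise, then greedy-solve.
    s = (c * pow(r, -1, q)) % q
    return superincreasing_decrypt(s, private)
```

The attack you would study works without knowing r and q, recovering an equivalent superincreasing structure from the public key alone.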

8. Side Channel Attacks Paul Kocher introduced "timing attacks" in the mid-1990's. These attacks are based on the observation that cryptosystems often take different amounts of time to process different inputs. For example, as we will see, RSA decryption uses modular exponentiation, where the exponent is the private key. And, as we will also see, the amount of time required to perform modular exponentiation depends on the number of bits in the exponent that are set to 1. Kocher figured out how to turn this observation into an actual attack. Some time later, Kocher introduced the idea of a "power analysis" attack, which focuses on the actual physical implementation of a cryptosystem. Kocher showed that power consumption measurements collected during cryptographic operations can also provide the basis for an attack.

Timing and power analysis attacks, along with their variations, are referred to collectively as "side channel" attacks: they exploit a channel of information other than (or in addition to) plaintext and ciphertext.
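The leak described above is easy to see in code. Here is a sketch of textbook left-to-right square-and-multiply (not any particular library's implementation) that counts the extra multiplications caused by 1-bits in the exponent:

```python
def modexp_square_and_multiply(base, exponent, modulus):
    """Left-to-right square-and-multiply.  Each 1-bit in the exponent costs an
    extra modular multiplication; that data-dependent cost is exactly the
    timing signal Kocher's attack exploits."""
    result = 1
    extra_mults = 0
    for bit in bin(exponent)[2:]:
        result = result * result % modulus       # square on every bit
        if bit == "1":
            result = result * base % modulus     # extra work only on 1-bits
            extra_mults += 1
    return result, extra_mults
```

A constant-time implementation would perform the same operations regardless of the key bits, which is one of the countermeasures your paper could discuss.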

A paper on this topic would focus on a particular type of side channel attack. It would explain in detail how the attack works in practice, and would also describe possible countermeasures.

Kocher's original paper, "Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS, and Other Systems", can be found here:
https://www.cryptography.com/resources/whitepapers/TimingAttacks.pdf
Some additional useful references:
i. J. Kelsey et al., "Side Channel Cryptanalysis of Product Ciphers"
ii. P. Kocher, J. Jaffe, B. Jun, "Differential Power Analysis"
iii. D. Brumley and D. Boneh, "Remote Timing Attacks are Practical"

9. Side Channel Attacks on AES In April 2005, Daniel Bernstein announced a successful attack against AES that exploits the timing characteristics of table lookups:
D. Bernstein, "Cache-timing Attacks on AES"
Later in 2005, Osvik, Shamir, and Tromer announced new and more powerful cache timing attacks on AES:
"Cache Attacks and Countermeasures: the Case of AES"

You can easily find more recent papers in the same vein. These attacks came as a surprise. In fact, in its final evaluation in 2000, NIST had declared that Rijndael (the algorithm that provides the basis for AES) is not vulnerable to timing attacks!
An entire session at the 2006 RSA conference was devoted to the problem of implementing AES in a way that is both efficient and resistant to timing attacks.
A paper on this topic would provide an overview of one or more of these attacks, together with possible countermeasures. You don't have to confine your attention to the references I've listed here; more recent attacks have been discovered, and you could focus on one of these if you prefer.
10. Homomorphic Encryption (with applications to Secure Electronic Voting) Certain public key algorithms satisfy a "homomorphic property", a mathematical property that we will discuss in class in connection with RSA. In the past few years, cryptographers have developed "fully homomorphic encryption schemes", with significant potential implications for a variety of security applications. One such application involves the design of secure electronic voting schemes. What I have in mind for this project is a paper that explores the applications of homomorphic encryption to electronic voting.
However, you should feel free to develop your own variant of the project: you might consider applications of homomorphic encryption other than electronic voting; or you might explore cryptographic techniques for electronic voting other than (or perhaps in addition to) homomorphic encryption. In any case, please keep in mind that for the project I'm looking for depth rather than breadth; I'm not interested in a survey paper that (say) devotes two pages to each of six different techniques.
Here are a couple of good, informal introductions to homomorphic encryption:
Brian Hayes, "Alice and Bob in Cipherspace", American Scientist, 2012
Craig Stuntz, "What is Homomorphic Encryption, and Why Should I Care?"
Here is a starting point for cryptographic approaches to electronic voting (you can easily find many more):
H. Lipmaa, "Secure Electronic Voting Protocols"

11. Formal Methods for Cryptographic Protocol Analysis A cryptographic protocol consists of an exchange of messages between two or more principals in order to achieve some cryptographic goal, such as authentication (you want to establish that the person with whom you're exchanging messages really is Alice). Such protocols typically make use of encryption, digital signatures, and other cryptographic algorithms. However, the construction of a secure protocol turns out to be a subtle and tricky business. In particular, it's possible for a protocol to be vulnerable to attack, even if the cryptographic algorithms it uses are perfectly secure. Schneier provides an introduction to these issues in chapter 3 of Applied Cryptography.

Because cryptographic protocols are both very important and very difficult to get right, the use of formal methods for analyzing their correctness has become an active research area. "Formal methods", in this context, refers to logic, theorem proving, model checking, and other "mathematical" techniques. Wenbo Mao provides an excellent overview in chapter 17 of Modern Cryptography. Many additional references are available online. Here is a small sample:

i. C. Meadows, "Formal Methods for Cryptographic Protocol Analysis"

ii. S. Older and S. Chin, "Formal Methods for Assuring Security of Protocols"

iii. C. Cremers, S. Mauw, E. de Vink, "Formal Methods for Security Protocols: Three Examples of the Black-Box Approach"

iv. John Mitchell slides: theory.stanford.edu/~jcm/slides/jcm-msr-rta-02.ppt

v. M. Burrows, M. Abadi, R. Needham, "A Logic of Authentication"

One possible project on this topic could consist of an expository paper that looks in depth at one formal approach for protocol analysis, such as BAN logic. In addition to explaining the approach, such a paper should consider its strengths and limitations. A second kind of paper would compare two or more approaches (keeping in mind that I'm looking for depth rather than breadth).
An alternative to the paper would be to implement and experiment with some protocol verification strategy. A useful reference for this alternative is

Peter Ryan et al., Modelling and Analysis of Security Protocols, Addison-Wesley 2000

This book shows how one can use the Casper tool to transform a high-level description of a protocol and its desired security properties into a form that can be analyzed automatically by a particular model checker (the FDR model checker). It is an interesting project to use this system to find the known flaws in some nontrivial protocols.

12. Quadratic or Number Field Sieve for factoring large integers The quadratic sieve method for factoring large integers was developed in the early 1980's. At that time, it was the fastest method for factoring integers n that have no prime factor of order of magnitude significantly less than √n. But the number field sieve, which came along ten years later, is even faster. A project on this topic could take the form of a paper that explains and compares these methods; or it could take the form of a shorter explanatory paper together with an implementation of one or both methods. Some references:
i. N. Koblitz, A Course on Number Theory and Cryptography, chapter 5

ii. C. Pomerance, "A Tale of Two Sieves,"
iii. C. Pomerance, Cryptology and Computational Number Theory, American Mathematical Society, 1990
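Both sieves are elaborations of the congruent-squares idea: find x and y with x^2 ≡ y^2 (mod n) and hope that gcd(x - y, n) is a proper factor. Fermat's method, sketched below, is the simplest instance; the quadratic and number field sieves spend their effort manufacturing such congruences by combining many small "relations" instead of waiting for a perfect square to appear:

```python
import math

def fermat_factor(n):
    """Factor odd composite n by searching for x with x^2 - n a perfect
    square y^2, so that n = (x - y)(x + y).  Fast only when the two factors
    are close together; the sieves generalize this congruent-squares idea."""
    x = math.isqrt(n)
    if x * x < n:
        x += 1
    while True:
        y2 = x * x - n
        y = math.isqrt(y2)
        if y * y == y2:
            return x - y, x + y
        x += 1
```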

13. Elliptic Curve Cryptosystems Elliptic curve public key cryptosystems have become increasingly popular. Their popularity arises from the fact that they can provide the same level of security as RSA and other, more traditional public key cryptosystems, with a much smaller key size. (For any security claim of this sort, there's always an implicit "As far as we know". Very few if any of these claims have actually been proven.) A paper on this topic would provide an overview of elliptic curve cryptosystems, as well as consider practical implementation issues. (I'll present an introduction to ECC in class; your paper would consider the topic in more depth.) One fundamental issue is that of choosing an appropriate elliptic curve. This issue arises in the following way. The security of an elliptic curve cryptosystem is based on the difficulty of solving a discrete logarithm problem in a group of points on the elliptic curve. (We'll discuss the discrete logarithm problem in class.) However, solving this problem on an elliptic curve is difficult only if you have chosen the curve correctly. References:

i. Stinson, chapter 6
ii. P. Garrett, Making, Breaking Codes, chapter 28
iii. N. Koblitz, A Course on Number Theory and Cryptography, chapter 6
iv. D. Hankerson, A. Menezes, S. Vanstone, Guide to Elliptic Curve Cryptography
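To get a feel for the group of points, here is a sketch of the affine addition law over a prime field; the curve y^2 = x^3 + 2x + 2 over GF(17) with base point (5, 1) is a standard textbook toy, far too small for any security:

```python
def ec_add(P, Q, a, p):
    """Add points on y^2 = x^3 + a*x + b over GF(p).  Points are (x, y)
    tuples; None represents the identity (point at infinity)."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                      # P + (-P) = identity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p  # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p         # chord slope
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)
```

Repeated addition gives scalar multiplication, and the discrete logarithm problem mentioned above is recovering the scalar from the resulting point.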

14. Secret sharing Imagine a bank in which none of the five managers trusts any of the others individually with the combination to the vault. One strategy for dealing with this situation is to divide the vault combination amongst the managers in such a way that any three of them together can open the vault, but fewer than three cannot. (The "auto-destruct sequence" on Star Trek worked something like this.) The cryptographic version of this idea is secret sharing. Using a secret sharing scheme, one can divide a secret key into pieces and distribute the pieces to different people in such a way that only certain subsets of the people can get together to recover the key.
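The classic construction here is Shamir's scheme: the secret is the constant term of a random polynomial of degree k-1 over a finite field, each share is a point on the polynomial, and any k shares determine the secret by Lagrange interpolation while k-1 shares reveal nothing. A minimal sketch (the field size and deterministic seed are illustrative choices):

```python
import random

P = 2**61 - 1   # a Mersenne prime comfortably larger than our toy secrets

def make_shares(secret, k, n, seed=0):
    """Split secret into n shares, any k of which suffice to recover it.
    Shares are points (x, f(x)) on a random degree-(k-1) polynomial with
    f(0) = secret, all arithmetic mod P."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation evaluated at x = 0 recovers f(0) = secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

With k = 3 and n = 5 this is exactly the bank-vault scenario: any three managers can reconstruct the combination, but two cannot.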

A paper on this topic would consider technical details as well as applications of secret sharing schemes. The Wikipedia article on secret sharing provides a good starting point for this topic. See also chapter 13 of Stinson's book.
i. H. Krawczyk, "Secret Sharing Made Short"
ii. M. Bellare and P. Rogaway, "Robust Computational Secret Sharing"
Threshold cryptography is an extension to secret sharing, in which various cryptographic operations are shared:

iii. MIT page: https://groups.csail.mit.edu/cis/cis-threshold.html
iv. T. Wu, M. Malkin, D. Boneh, "Building Intrusion Tolerant Applications"

15. Secure Information Dispersal This topic is closely related to the previous one; but the focus here is on integrity and availability of information, rather than on confidentiality. More specifically, the focus is on the integrity protection of information in distributed systems. The third reference below discusses applications. A paper could (but need not) combine this topic with the previous topic.
i. H. Krawczyk, "Distributed Fingerprints and Secure Information Dispersal"
ii. C. Cachin, "Asynchronous Verifiable Information Dispersal"
iii. J. Wylie, "Survivable Information Storage Systems"

16. Primality Testing In August 2002, Manindra Agrawal and two of his students announced their discovery of a deterministic polynomial time algorithm for determining whether a given integer is prime. This algorithm is now known as the "AKS primality test". See the following "Primes is in P" page for details and lots of references:
https://fatphil.org/maths/AKS/

The AKS algorithm is an exceptionally interesting result from the standpoint of complexity theory. But the traditional probabilistic Miller-Rabin algorithm is still both faster and easier to implement.

It would be an interesting project to investigate the comparative performance of AKS vs. Miller-Rabin. You could proceed either by writing a paper that provides an overview and analysis, or by implementing the algorithms yourself, and comparing their performance directly. For your AKS implementation, you would want to consider various strategies for improving the algorithm's performance. You can easily find many references on this topic, such as the following:

R. Salembier and P. Southerington, "An Implementation of the AKS Primality Test"

17. Generating large primes of a specified bitlength for RSA Implement the Miller-Rabin probabilistic algorithm for testing whether a given integer is prime (we'll discuss this algorithm in class). Use this algorithm as the basis for a procedure for generating large primes. Then extend the procedure to one that generates "strong primes".

The idea of strong primes arises in the following way. In RSA one uses a modulus n = pq, where p and q are distinct odd primes. However, in order to ensure the security of RSA against cryptanalytic attacks based on known factoring algorithms, it has been argued that p and q should satisfy some additional constraints. A prime that satisfies these additional constraints is a "strong prime". See Handbook of Applied Cryptography, pp. 149-150, for details.

In applications, one must often generate primes of a specified, exact bitlength. A prime with a smaller bitlength leads to decreased security; a prime with a larger bitlength will cause problems for a protocol that expects a particular bitlength. So I am asking you for this topic to implement an algorithm that generates (probable) primes of a specific given bitlength. A hint on how to accomplish this appears in section 4.54(ii) of the Handbook. It's OK if the exact bitlength feature of your algorithm doesn't work in all cases.
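A bare-bones sketch of the two ingredients, Miller-Rabin plus exact-bitlength sampling, without the strong-prime constraints (forcing the top bit of every candidate to 1 is one way to realize the exact-bitlength requirement; check it against the Handbook's suggestion):

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin: write n - 1 = 2^s * d with d odd.  Each random base a
    either proves n composite or passes; a composite n passes one round with
    probability at most 1/4."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):       # quick trial division
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False                 # composite witness found
    return True

def random_prime(bits):
    """Probable prime of exactly `bits` bits: sample odd candidates with the
    top bit forced to 1, so the bitlength never falls short."""
    while True:
        cand = random.randrange(1 << (bits - 1), 1 << bits) | 1
        if is_probable_prime(cand):
            return cand
```

The strong-prime extension would add the Handbook's constraints on p - 1 and p + 1 on top of this loop.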

18. Quantum Cryptography As we will discuss in great detail, the security of cryptographic algorithms such as RSA is based on the presumed intractability of various computational problems, such as integer factoring. Quantum cryptography provides a new paradigm: the security of a quantum cryptographic system is based on physical law. Most work in the area of quantum crypto has focused on "quantum key distribution" (QKD).

A paper on this topic would start by providing an overview of QKD protocols. But I would also like you to look in detail at one (not both!) of the issues listed below.

The most accessible introduction to quantum cryptography that I know of is contained in chapters 2 and 3 of the following text:

Susan Loepp and William Wootters, Protecting Information: From Classical Error Correction to Quantum Cryptography

This book provides a good general introduction, as well as some material that pertains specifically to the second topic below.

Authentication A crucial aspect of any key establishment protocol, quantum or classical, is authentication. We are considering a scenario where Bob and Alice establish a shared key by carrying out some protocol. But how does Bob know that he is really establishing the key with Alice, and not some malicious intruder? Authentication schemes are used to ensure that Bob and Alice really are communicating with each other. As you would expect, public key protocols use authentication schemes that are based on public key techniques. QKD protocols can't use these techniques: if they did, the security of the overall protocol would reduce to the security of the public key techniques, and there would no longer be any advantage in using the quantum approach. Instead, QKD typically uses something called the Wegman-Carter authentication protocol which, like the one-time pad, features information-theoretic rather than computational security.
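The shape of the construction can be sketched as "universal hash, then mask with fresh one-time key material". The polynomial-evaluation hash below is only a stand-in for illustration; Wegman and Carter's actual hash families, and the parameters used in real QKD implementations, differ:

```python
P = 2**127 - 1   # prime field for the hash; an illustrative parameter choice

def poly_hash(key, blocks):
    """Evaluate the message, viewed as polynomial coefficients, at a secret
    point.  Two distinct messages collide only when the key is a root of
    their difference polynomial, so collisions are rare for a random key."""
    acc = 0
    for b in blocks:
        acc = (acc + b) * key % P
    return acc

def one_time_tag(hash_key, pad, blocks):
    """Wegman-Carter shape: universal hash, then mask with a fresh one-time
    pad value.  Reusing the pad destroys the information-theoretic guarantee."""
    return (poly_hash(hash_key, blocks) + pad) % P
```

In QKD the pad values are drawn from previously shared secret key, which is why the scheme consumes a little key material per authenticated message.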

One possible approach to writing a paper on this topic would be to explain the Wegman-Carter protocol, and how it works in conjunction with QKD. You might also want to take into account a recent paper that argues that the standard way of incorporating authentication into QKD protocols is flawed, as well as a suggestion by the authors about how to fix this problem:

Cederlöf and Larsson, "Security Aspects of the Authentication Used in Quantum Cryptography"

Security Proofs The basic security claim is that the laws of quantum mechanics ensure that there can be no undetected eavesdropping against a QKD protocol. I'd like you to look into the question of how one proves this claim (with respect to a specific protocol, such as BB84). The technical papers on this topic tend to be quite difficult, but the book by Loepp and Wootters (listed above) provides a very accessible introduction.
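To make the claim concrete, here is a small classical simulation of my own of BB84 sifting, with the quantum measurements replaced by their statistics (a measurement in the wrong basis is modeled as a coin flip). Against an intercept-resend eavesdropper, roughly 25% of the sifted bits disagree, which Alice and Bob detect by comparing a sample; note that a full security proof must handle far more general attacks than this one.

```python
import random

def bb84_sift(n, eavesdrop=False, rng=random):
    """Simulate BB84 sifting: Alice sends random bits in random bases,
    Bob measures in random bases, and they keep the positions where
    their bases agree."""
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.randint(0, 1) for _ in range(n)]
    bob_bases   = [rng.randint(0, 1) for _ in range(n)]
    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        send_basis = a_basis
        if eavesdrop:
            # Intercept-resend: Eve measures in a random basis and
            # resends; measuring in the wrong basis randomizes the bit.
            e_basis = rng.randint(0, 1)
            if e_basis != a_basis:
                bit = rng.randint(0, 1)
            send_basis = e_basis
        # Bob: correct basis recovers the bit, wrong basis is a coin flip.
        bob_bits.append(bit if b_basis == send_basis else rng.randint(0, 1))
    sifted = [(a, b) for a, b, x, y in
              zip(alice_bits, bob_bits, alice_bases, bob_bases) if x == y]
    errors = sum(a != b for a, b in sifted)
    return len(sifted), errors

rng = random.Random(0)
kept, errs = bb84_sift(10_000, eavesdrop=False, rng=rng)
assert errs == 0                    # no disturbance without an eavesdropper
kept, errs = bb84_sift(10_000, eavesdrop=True, rng=rng)
assert 0.2 < errs / kept < 0.3      # intercept-resend: ~25% sifted-key errors
```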

19. Post-Quantum Cryptography If practical quantum computers are built someday (and it seems increasingly likely that they will be), most of the public key cryptography used today to secure the internet will be broken. The reason for this is the discovery of polynomial time quantum algorithms for integer factoring and discrete logarithms, the problems on which these widely used public key systems are based.

"Post-Quantum Cryptography" refers to cryptographic systems whose security is based on problems for which no polynomial time quantum algorithm is known. (This is different, of course, from saying that no polynomial time quantum algorithm is possible.) In looking for candidate post-quantum systems, it isn't enough just to look for systems whose security is based on problems other than factoring or discrete logarithms, since some systems based on other, more exotic problems have been broken by quantum algorithms as well.

The following book provides a detailed technical survey of candidate cryptosystems for a post-quantum world:

D. Bernstein, J. Buchmann, E. Dahmen, Post-Quantum Cryptography, Springer, 2009.

A project on this topic would consist of a tutorial paper on one of the public key cryptosystems described in this book. As usual, I am asking you to provide some technical depth in your paper; I am not looking for a survey. Please note that you do not have to know anything at all about quantum computing to work on this topic!

Here are some details on two of the candidate post-quantum cryptosystems.

NTRU is a public-key cryptosystem, introduced in 1995, whose security is based on the presumed intractability of the shortest vector problem on lattices. No polynomial time quantum algorithm is known for solving this problem. An expository paper on NTRU might be an interesting project for someone with a good background in abstract algebra. The original paper is:

J. Hoffstein, J. Pipher, J. Silverman, "NTRU: A Ring-Based Public Key Cryptosystem"

See also:

i. J. Hoffstein, J. Pipher, J. Silverman, Introduction to Mathematical Cryptography, Springer, 2009

ii. https://cseweb.ucsd.edu/classes/sp07/cse206a/ (the first four lectures here provide a useful introduction to lattices)
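The algebra underlying NTRU is multiplication in the convolution ring Z_q[x]/(x^N - 1), together with a "centered lift" of coefficients on which decryption depends. A minimal sketch of these two operations (toy parameters of my own choosing, not a full NTRU implementation):

```python
def conv(a, b, N, q):
    """Multiply polynomials a and b (lists of N coefficients, lowest
    degree first) in the ring Z_q[x]/(x^N - 1): cyclic convolution."""
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[(i + j) % N] = (c[(i + j) % N] + ai * bj) % q
    return c

def center(a, q):
    """Lift coefficients from {0, ..., q-1} to the centered range
    [-q/2, q/2); NTRU decryption relies on this lift recovering the
    (small) true coefficients exactly."""
    return [((x + q // 2) % q) - q // 2 for x in a]

# (1 + 2x + 3x^2)(1 + x^2) in Z_32[x]/(x^3 - 1): x^3 wraps to 1.
assert conv([1, 2, 3], [1, 0, 1], 3, 32) == [3, 5, 4]
assert center([31, 1, 17], 32) == [-1, 1, -15]
```

In real NTRU parameter sets N is on the order of several hundred, and the key generation step additionally requires computing polynomial inverses modulo p and q, which is omitted here.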

The McEliece cryptosystem was developed in the early days of public key cryptography; its security is based on the presumed hardness of some problems from coding theory. It has never been used much in practice because it requires very large key sizes. But the advent of quantum algorithms has led to renewed interest.
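The coding-theory primitive behind McEliece (decoding is easy when you know the code's hidden structure) can be illustrated with the [7,4] Hamming code, which I use here as a toy stand-in for the Goppa codes McEliece actually employs:

```python
# [7,4] Hamming code over GF(2): generator G = [I4 | A], parity check
# H = [A^T | I3].  Decoding is easy because we know H; in McEliece the
# public key is a scrambled generator matrix with no visible structure.
G = [[1,0,0,0,1,1,0],
     [0,1,0,0,1,0,1],
     [0,0,1,0,0,1,1],
     [0,0,0,1,1,1,1]]
H = [[1,1,0,1,1,0,0],
     [1,0,1,1,0,1,0],
     [0,1,1,1,0,0,1]]

def encode(m):
    """Codeword = m * G over GF(2) (4 message bits -> 7 code bits)."""
    return [sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

def decode(r):
    """Syndrome decoding: for a single flipped bit, the syndrome H * r
    equals the column of H at the error position."""
    s = [sum(H[i][j] * r[j] for j in range(7)) % 2 for i in range(3)]
    if any(s):
        pos = next(j for j in range(7) if [H[i][j] for i in range(3)] == s)
        r = r[:]
        r[pos] ^= 1
    return r[:4]  # systematic code: the first 4 bits carry the message

m = [1, 0, 1, 1]
c = encode(m)
c[2] ^= 1                  # a single channel error
assert decode(c) == m      # the error is located and corrected
```

In McEliece the legitimate receiver decodes with the secret structured code, while an attacker faces what looks like decoding a random linear code, a problem presumed hard even for quantum computers.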

In connection with this topic I'd like to call the following paper to your attention:

Dinh, Moore, and Russell, "The McEliece Cryptosystem Resists Quan- tum Fourier Sampling Attacks".

First of all, let me say that I do not actually expect you to read this paper beyond the abstract and first page, since it presupposes considerable detailed knowledge of quantum computing. (In fact, the paper is hard to read even if you do have detailed knowledge of quantum computing.) But it's good to be aware of the paper, because it presents a very interesting result. The reason the result is interesting is that quantum Fourier sampling is the most widely used algorithmic technique in quantum computing. As far as I know, all polynomial time quantum algorithms that break existing cryptosystems are based on this technique. Thus, the discovery that the McEliece cryptosystem is resistant to this technique provides evidence that McEliece is resistant in general to quantum computing. Of course, it is still entirely possible that McEliece will be broken by a quantum algorithm that uses some other quantum algorithmic technique (techniques other than Fourier sampling do exist, and one expects new techniques to be discovered). But still, this is a very interesting result.
