Internet Security Overview Essay, Research Paper

1 Introduction

The recent acceleration in the uptake of electronic commerce (e-commerce) over the Internet has sharpened the need for methods of transferring data securely over what amounts to a worldwide public network. The most commonly cited example of this requirement is the ability of customers to make electronic purchases from company Web sites using debit cards such as VISA. Public confidence in e-commerce has to be high for it to succeed and to continue to grow, whether via existing debit card transactions or more tightly integrated electronic cash systems.

In the academic world, the need for security in data exchanges is not intuitively seen to be so high. Whereas e-commerce relies on secure channels between sites that may often lie on opposite sides of the globe, the nature of “sensitive” academic transactions is more likely to be localised within individual campuses. However, the requirement is still there. For example, exam marks may need to be entered by university departments into centrally maintained databases, centralised purchasing may lead to financial information being exchanged, and of course remote computing access (or Web-based booking systems for such access) may result in password information being transmitted.

This report looks at the potential for widespread deployment of Secure Internet Protocols within UK HEIs, offering an overview of what are likely to be the important issues involved. We review past JISC reports on security, on existing and future technology, and we comment on the current stances of UKERNA and, as far as can be deduced, the UK Government. The report concludes with some key observations.

2 Overview of Secure Protocol Technology

The case for adoption of Secure Internet Protocol technology is one made very strongly by Phil Zimmermann, author of the public domain PGP (Pretty Good Privacy) system (1991):

“Today, if the Government wants to violate the privacy of ordinary citizens, it has to expend a certain amount of expense and labor to intercept and steam open and read paper mail, and listen to and possibly transcribe spoken telephone conversation. This kind of labor-intensive monitoring is not practical on a large scale. This is only done in important cases when it seems worthwhile. More and more of our private communications are being routed through electronic channels. Electronic mail is gradually replacing conventional paper mail. E-mail messages are just too easy to intercept and scan for interesting keywords. This can be done easily, routinely, automatically, and undetectably on a grand scale.”

He adds: “If privacy is outlawed, only outlaws will have privacy. Intelligence agencies have access to good cryptographic technology. So do the big arms and drug traffickers. So do defense contractors, oil companies, and other corporate giants. But ordinary people and grassroots political organizations mostly have not had access to affordable military grade public-key cryptographic technology. Until now. PGP empowers people to take their privacy into their own hands. There’s a growing social need for it. That’s why I wrote it.”

PGP is just one means to obtain privacy when communicating (e.g. by e-mail) on a LAN or over the worldwide Internet. The scope for adoption of secure protocols is broad.

2.1 One-time Passwords

It is important to understand that the issue of authentication (proving who sent the message) is different from the issue of secure transmission (preventing the message being “snooped” and read in transit, or being altered in transit). There may be many cases where confirming identity is all that is required, the most common instance probably being a password system for a user login.

The danger for users entering passwords over the Internet (e.g. a lecturer on sabbatical in the USA accessing their home university account on JANET) is of course that the password may be snooped.

Current technology to avoid this problem tends to focus on one-time passwords: passwords which, if compromised, could not be used again successfully, because the password changes with each use. One instance is S/Key, in which a user is supplied with a list of passwords, each of which is discarded once used. This requires a risk assessment weighing the danger of holding a password list (which could be printed, or photocopied) against the danger of a password being snooped in transit.
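
The idea behind such a hash-chain scheme can be sketched as follows. This is an illustration only: real S/Key uses MD4/MD5 with a seed and encodes its output as short words, whereas this sketch uses SHA-1 and invented values.

    import hashlib

    def hash_chain(secret: bytes, n: int) -> bytes:
        # Apply a one-way hash function n times to the user's secret.
        value = secret
        for _ in range(n):
            value = hashlib.sha1(value).digest()
        return value

    # The server stores only the top of the chain, e.g. H^100(secret).
    N = 100
    secret = b"a memorable pass phrase"        # hypothetical user secret
    stored = hash_chain(secret, N)

    # To log in, the user supplies the previous element, H^99(secret).
    candidate = hash_chain(secret, N - 1)

    # The server hashes the candidate once and compares; on success it
    # replaces the stored value, so a snooped password cannot be replayed.
    if hashlib.sha1(candidate).digest() == stored:
        stored = candidate
        print("authenticated; the next login uses the next element down")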

Security Dynamics’ solution to this problem is a product known as SecurID. Here, a user has a small card or key fob (a token) on which is displayed a 6-digit number which changes every 60 seconds. Each token has a unique serial number which is registered with the authenticating software, which, provided the authenticating server and token maintain synchronised time, allows that server to know the code on display to the user. This is a one-time password system with the benefit of no “written down” component. A PIN code is also required for authentication, should the token fall into the wrong hands (the old security maxim of “something you know, plus something you have”). Drawbacks are that SecurID requires replacement of service software on hosts using it, the token can be lost, and the PIN can still be snooped (leaving only the six-digit protection).
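
SecurID’s own code-generation algorithm is proprietary, but the time-synchronisation principle can be sketched with a standard HMAC-over-a-time-window construction (in the style of what later became the TOTP standard). The seed value below is invented, and this is not the algorithm SecurID itself uses.

    import hashlib, hmac, struct, time

    def time_code(token_seed: bytes, period: int = 60, digits: int = 6) -> str:
        # Derive a short numeric code from a shared seed and the current
        # time window; server and token agree while their clocks agree.
        counter = int(time.time()) // period
        mac = hmac.new(token_seed, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                          # dynamic truncation
        value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(value % 10 ** digits).zfill(digits)

    seed = b"hypothetical token seed 0001"
    print(time_code(seed))     # what the token would display this minute
    # The authenticating server runs the same function (perhaps also for the
    # previous and next windows, to absorb clock drift) and compares.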

SecurID is being evaluated at Southampton as part of JTAP Project 631, which is investigating methods by which secure access can be given to remote and transient users on a (campus) network. SecurID is supported by CheckPoint in their Firewall-1 product, allowing users to authenticate through a firewall (which may then initiate a secure channel). The “next generation” of SecurID promises to be BOKS, a system that does not rely on a physical token. This technology is also under investigation under JTAP Project 631.

While a number of US Universities have bought into SecurID, it is not clear that widespread adoption of such a technology would be practical for the UK. The cost per token is certainly one deterrent. Smart card technology, covered later in this report, would seem to offer a more flexible and (probably) less vendor-specific solution.

2.2 Kerberos

An alternative to one-time passwords, the Kerberos Authentication System uses a series of DES-encrypted messages to prove to a server that a client is running on behalf of a particular user.

A simplified description of the Kerberos protocol is as follows: When a client wishes to contact a particular server, it first contacts an authentication server (AS). Both the user and the server are required to have keys registered with the AS; the user’s key is derived from a password that they choose, and the server key is randomly generated. The AS creates a new random key, called the session key. It encrypts one copy of the session key with the server’s key, along with the name of the user and an expiration time. This is known as the ticket. The AS then creates a new copy of the session key, encrypts it with the user’s key, and passes both it and the ticket to the client. The client can then decode the session key and create an authenticator, which contains (among other things) the current time. The authenticator is encrypted using the session key. The ticket and the authenticator are then passed to the server by the client, which decrypts the ticket and uses the resultant session key to decrypt the authenticator. If the time that is extracted from the authenticator is the current time (in practice a leeway of around 5 minutes is allowed), then the user is authenticated.
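
A much-simplified sketch of that exchange is given below, assuming the third-party Python cryptography package. Its Fernet recipe (AES-based) stands in for the DES encryption Kerberos actually uses, the user name is invented, and a real ticket also carries an expiration time and other fields.

    from time import time
    from cryptography.fernet import Fernet

    # Keys registered with the AS: the user's key (derived from a password
    # in real Kerberos) and the server's randomly generated key.
    user_key, server_key = Fernet.generate_key(), Fernet.generate_key()

    # --- Authentication server (AS) ---
    session_key = Fernet.generate_key()
    ticket = Fernet(server_key).encrypt(b"user=alice;session=" + session_key)
    for_client = Fernet(user_key).encrypt(session_key)

    # --- Client: recovers the session key, builds the authenticator ---
    session_key_cli = Fernet(user_key).decrypt(for_client)
    authenticator = Fernet(session_key_cli).encrypt(str(time()).encode())

    # --- Server: decrypts the ticket, then the authenticator ---
    session_key_srv = Fernet(server_key).decrypt(ticket).split(b"session=")[1]
    timestamp = float(Fernet(session_key_srv).decrypt(authenticator))
    assert abs(time() - timestamp) < 300        # roughly the 5-minute leeway
    print("user authenticated")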

Kerberos requires that client and server software be modified in order for it to be used; however, an increasing quantity of software now has Kerberos built in, and support is promised in Windows 2000 Server.

2.3 Public Key Infrastructure

Public key cryptography offers another authentication (and encryption) solution. It works on the basis that two keys can be generated, each of which decodes data encrypted by the other. A public key system such as PGP allows users to generate private/public key pairs. One key is retained by the user as a private key; the other is released as a public key. Authentication can then be achieved by the sender signing a message with their private key: the recipient, decrypting with the public key, knows that only the sender holds the unique private key with which the message was originally encrypted. For privacy, the sender encrypts with the recipient’s public key, so that only the recipient can decode the data.

Because public key encryption can be computationally expensive, authentication typically (e.g. in PGP) involves encrypting just an MD5 hash of the message (which in itself further protects against tampering), and privacy involves encrypting an IDEA session key which in turn is used to encode the message text. This system is believed to be robust, and PGP has been in service for some eight years using it. The main use of PGP is for secure e-mail.

Its main weakness remains the trust placed in the public key. If a user encrypts a message with an impostor’s public key, believing it to be the intended recipient’s real public key, the impostor can decode the message with the private key that matches the fake public key. For this reason, “trusted” public key servers have been set up for PGP, and many conferences and meetings feature “key signing” events.
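
As a brief illustration of signing with a private key and verifying with the matching public key, here is a sketch using the third-party Python cryptography package (an assumption on our part). PGP itself uses MD5 and IDEA as described above, whereas this sketch uses modern SHA-256 hashing and PSS padding, and the message text is invented.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Generate a private/public key pair (PGP manages these in a key ring).
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    message = b"Exam marks for the module are attached."

    # Authentication: sign a hash of the message with the private key.
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
    signature = private_key.sign(message, pss, hashes.SHA256())

    # Anyone holding the public key can verify the signature; verify()
    # raises InvalidSignature if the message or signature has been altered.
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("signature verified")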

In recent years Certificate Authorities (CAs) have blossomed on the Internet; the market leaders are currently Verisign and Thawte. A company which wants to offer a “secure” Web site can obtain a certificate from a CA which contains the company’s public key and which is signed by the CA using its private key. When a customer wants to access the company Web site to (for example) buy a product online, their browser inspects the company’s certificate. Because Verisign’s public key is built into all common browsers (e.g. Netscape Communicator and Microsoft Internet Explorer), the customer’s browser can verify the certificate (to identify the company) and then use the company’s authenticated public key when exchanging data with the Web site. The one leap of faith here is that the customer trusts the built-in certificate (which they may not even be aware of). Since they’re running the browser code anyway, that leap of faith is not so big.

The hot issue at present is the building of a trustable Public Key Infrastructure (PKI). One method to circulate public keys is by building them into the browser. PGP users often trust public keys displayed on Web pages, or even received in e-mails from the (supposed) sender. If Web page keys are to be trusted, one might argue that it is better to abstract public key distribution to the DNS (Domain Name Service), and work is ongoing in that area. One emerging standard for PKI appears to be X.509v3 certificates, with LDAP as the directory service to serve them.

2.4 X.509 Digital Certificates

A digital certificate is an electronic statement signed by an independent trusted third party, typically a Certification Authority. The X.509 standard defines the format for these certificates, incorporating information about the subject being certified, including the following (a short sketch of reading these fields appears after the list):

Subject Identification: data about the object being certified (a person’s name, e-mail address, organisation)

Public Key Information: the public key of the subject being certified (usually an RSA public key, in a similar vein to PGP signatures)

Certifying Authority signature: the trusted third party digital signature certifying the two other core pieces of information in this certificate
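
A sketch of reading those fields from a PEM-encoded certificate with the third-party Python cryptography package (the package and the file name are assumptions):

    from cryptography import x509

    with open("example-server.pem", "rb") as f:     # hypothetical certificate file
        cert = x509.load_pem_x509_certificate(f.read())

    print(cert.subject)          # subject identification (name, organisation)
    print(cert.public_key())     # the subject's public key (typically RSA)
    print(cert.issuer)           # the Certification Authority that signed it
    print(cert.not_valid_after)  # the validity period set by the CA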

Plug-ins are available for a number of commercial e-mail and USENET news applications, including Outlook, Outlook Express and Eudora. The plug-ins can verify the authenticity of documents signed using digital certificates issued by Thawte or Verisign. They also have the ability to encrypt and decrypt documents for secured delivery to remote recipients over an otherwise insecure network.

The common mechanism by which applications send signed or encrypted documents using digital certificates is S/MIME (Secure Multipurpose Internet Mail Extensions). S/MIME is very similar in operation to PGP in that it also offers the ability to sign and/or encrypt messages. However, S/MIME is more flexible than PGP in that it is not limited to X.509 certificate authentication and is not restricted to encrypting/authenticating a single block of message data. For example, S/MIME can include multiple sub-documents within an e-mail, where each sub-document can be signed by a different party. In fact, one sub-document might even be a PGP-signed message itself!
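
The nested structure S/MIME exploits can be illustrated with Python’s standard email package. This shows only the MIME layering with multiple sub-documents, not the cryptographic signing, which in real S/MIME wraps each part in a multipart/signed or application/pkcs7-mime structure; the addresses and content below are invented.

    from email.message import EmailMessage

    msg = EmailMessage()
    msg["Subject"] = "Purchase order with supporting documents"
    msg["From"] = "finance@example.ac.uk"
    msg["To"] = "supplier@example.com"
    msg.set_content("Two documents are attached; each could carry its own signature.")

    # Each attachment becomes a separate MIME sub-part; under S/MIME each
    # sub-part could be individually wrapped in a signed structure, and one
    # of them could itself be a PGP-signed message.
    msg.add_attachment(b"purchase order data", maintype="application",
                       subtype="octet-stream", filename="order.dat")
    msg.add_attachment("-----BEGIN PGP SIGNED MESSAGE-----", subtype="plain",
                       filename="note.asc")
    print(msg.get_content_type())               # multipart/mixed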

2.5 Secure Socket Layer (SSL)

The primary use for SSL is secure access to Internet Web servers. SSL operates via public key encryption. In addition to exchange of keys, SSL allows negotiation of a cipher algorithm for the session. Algorithms include 3DES (triple encryption DES), IDEA (as also used by PGP), and RC2 or RC4 (which in their US export versions can only be used with 40-bit encryption as opposed to 128-bit).
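
As a sketch of how a client negotiates such a protected channel today, using Python’s standard ssl module (modern libraries negotiate TLS, discussed below, rather than the original SSL, and the host name here is hypothetical):

    import socket, ssl

    context = ssl.create_default_context()      # verifies the server certificate
    hostname = "www.example.com"                # hypothetical secure Web server

    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as ssock:
            # cipher() reports the algorithm, protocol version and key length
            # agreed during the handshake.
            print(ssock.version())              # e.g. 'TLSv1.3'
            print(ssock.cipher())               # (cipher name, protocol, secret bits)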

It is worth noting that 40-bit encryption has been known to be vulnerable for some time (witness a C-Net News article in 1997). The 1996 paper “Minimal Key Lengths for Symmetric Ciphers to Provide Adequate Commercial Security”, co-authored by Whitfield Diffie, suggests that “bearing in mind that the additional computational costs of stronger encryption are modest, we strongly recommend a minimum key-length of 90 bits for symmetric cryptosystems.”
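
A back-of-the-envelope calculation shows why each extra bit matters: the brute-force search space doubles per bit. The assumed rate of one billion keys tried per second is purely illustrative.

    rate = 10 ** 9                   # assumed: one billion keys tried per second
    for bits in (40, 90, 128):
        keys = 2.0 ** bits
        years = keys / rate / (60 * 60 * 24 * 365)
        print(f"{bits}-bit: {keys:.3e} keys, about {years:.3e} years at {rate:,} keys/s")
    # A 40-bit keyspace is exhausted in under 20 minutes at this rate, whereas
    # 90 bits would take on the order of 10^10 years.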

The SSL protocol is likely to be superseded in use by the forthcoming Transport Layer Security (TLS) protocol.

2.6 Firewalls

A firewall system is in essence a network router that also performs filtering on the traffic which passes through it. The level of deployment of firewalls within the HE community is not known by the authors, but it is suspected that many institutions have no system in use, either on their point of presence to JANET or internally within the institution.

It is important to recognise that while the introduction of firewall technology has to be seen as a Good Thing, the mere addition of a firewall system to a network’s entry point to the Internet does not guarantee complete security. In academic circles, it may be that, due to the pressures imposed by supporters of “academic freedom”, such firewalls run in “default allow” mode rather than “default deny”, thus only blocking a subset of known potential attack avenues for intruders. And even if “default deny” mode is used, there is the potential for intruders to gain access through allowed services (such as POP or IMAP mail servers, web servers, or SMTP e-mail hubs) if these are not tightly configured. JTAP Project 631 is piloting the “smooth” introduction of a “default deny” firewall at Southampton. The same project is investigating methods for allowing secure remote access through firewalls.
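
The distinction between the two policies can be sketched conceptually as follows; this is not the configuration language of any particular firewall product, and the rule set is invented.

    # Each rule permits one (protocol, destination port) pair; anything not
    # matched falls through to the default policy.
    ALLOW_RULES = {("tcp", 25), ("tcp", 80), ("tcp", 143)}   # SMTP, HTTP, IMAP

    def permitted(proto: str, port: int, default_deny: bool = True) -> bool:
        if (proto, port) in ALLOW_RULES:
            return True
        return not default_deny      # "default allow" lets everything else in

    print(permitted("tcp", 80))                          # True under either policy
    print(permitted("tcp", 6000))                        # blocked under default deny
    print(permitted("tcp", 6000, default_deny=False))    # the risk of default allow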

Whether or not a firewall is deployed, secure protocols are still required both to protect the integrity and privacy of data, and also for authentication of users in a transaction. A number of firewall products have built-in support for authentication or encryption systems (e.g. Firewall-1 has support for SecurID).

2.7 Secure Shell (SSH)

The SSH suite of utilities offers secure replacements for the standard Unix utilities rlogin, rsh and rcp. It provides a secure encrypted communication channel between two machines over an insecure network. The channel is used for the interactive login session, but other traffic can be piggy-backed on the channel, such as the X protocol, thus benefiting from the security provided. The channel can also be compressed, which is a major benefit over slow links such as modems and international connections.

Authentication can be done by conventional techniques (such as a plain-text password or the Unix .rhosts mechanism) or by using RSA public-key cryptography. The latter uses public and private keys associated with a user, and a pass phrase is used to authenticate the user (the pass phrase is typed in locally, and not sent across the network). Machines also have public and private keys and these can be used to stop security breaches via machine spoofing (IP spoofing, DNS spoofing or routing spoofing).
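
A sketch of scripted public-key authentication using the third-party paramiko library (an assumption on our part; the SSH suite itself provides command-line tools rather than this API). The host, user name and key path are hypothetical, and the pass phrase is used only locally to unlock the private key.

    import paramiko

    key = paramiko.RSAKey.from_private_key_file(
        "/home/alice/.ssh/id_rsa",              # hypothetical key location
        password="a long pass phrase")          # unlocks the key locally only

    client = paramiko.SSHClient()
    client.load_system_host_keys()              # known host keys guard against spoofing
    client.set_missing_host_key_policy(paramiko.RejectPolicy())

    client.connect("remote.example.ac.uk", username="alice",
                   pkey=key, compress=True)     # compression helps slow links
    stdin, stdout, stderr = client.exec_command("uptime")
    print(stdout.read().decode())
    client.close()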

SSH clients are available for both Unix and Microsoft operating systems.

2.8 Securing Workstation File Storage

Whatever secure protocols are used to transfer a document across a network, there will often be a copy of the document held somewhere on the user’s workstation. It is very easy for the workstation user to accidentally give away access to confidential documents to other people and other computers. To cite two examples, it could be done on a Microsoft Windows system by having an insecure administrator password and “sharing” entire hard disks; it could be done on a Unix system by using insecure passwords or by lack of user knowledge about file access controls. These concerns also apply to backups, which can be encrypted or password protected.

There are products on the market that help to solve these problems: they work by encrypting data at either the file level or the file-system level. Microsoft Windows 2000 claims the ability to encrypt at the file-system level: as long as users log out from their workstations, their files should be unreadable by others. Data Fellows market a product, “F-Secure FileCrypto”, which claims to provide similar facilities by integrating encryption services tightly with the file system and user interface.

In addition to facilities for encrypting and decrypting individual files, PGP for Microsoft Windows and for the Macintosh also includes a component called “PGPDisk”. This creates a new file system (represented as a new drive letter) containing files that are always stored encrypted, providing users with a very simple means of keeping a collection of files secure.
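
A minimal sketch of the kind of file-level encryption such products provide, deriving a key from a pass phrase with PBKDF2 and assuming the third-party Python cryptography package; the file names and pass phrase are invented.

    import os, base64
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def key_from_passphrase(passphrase: bytes, salt: bytes) -> bytes:
        # Stretch the pass phrase into a symmetric key.
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                         salt=salt, iterations=480_000)
        return base64.urlsafe_b64encode(kdf.derive(passphrase))

    salt = os.urandom(16)
    key = key_from_passphrase(b"a long pass phrase", salt)

    with open("exam_marks.csv", "rb") as f:              # hypothetical file
        ciphertext = Fernet(key).encrypt(f.read())
    with open("exam_marks.csv.enc", "wb") as f:
        f.write(salt + ciphertext)                       # keep the salt with the data

    # Decryption re-derives the same key from the pass phrase and stored salt.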

2.9 Secure Internet Protocol (IPsec)

Rather than putting extra load on client applications to diligently authenticate and validate the integrity of the data they receive, a more appropriate technique may be to push all of the security- and data-integrity-related functionality down into the network, or Internet Protocol (IP), layer.

The inherent problem at the moment is that the existing Internet Protocol (IPv4) was never designed with security in mind; IP is principally there to route datagrams over a network on a best effort basis, with the transport layer (TCP) ensuring reliable, but not secure, delivery.

An emerging protocol that attempts to correct this weakness and thus offer truly transparent IP delivery (in that higher-layers need not worry about data integrity) is Secure-IP, or IPsec.

The purpose of IPsec is to provide a standard mechanism for protecting all traffic on a network transparently, irrespective of the application. It can protect all traffic against unauthorized modification and eavesdropping and securely authenticate the parties that are communicating with each other. It renders most of the commonly used security attack methods ineffective. IPsec is a means by which secure VPNs can be offered over public network backbones.

However, IPsec is very much a retrospective “bolt-on” to the existing IP. This will change in the near future because IPv6, the next generation IP, has IPsec in its core specification. That is to say that all IPv6 capable devices must fully support the IPsec suite of protocols for authentication and encryption.

At the present time, it is not clear how readily IPsec, as it stands, could be deployed over JANET. It may be that JANET will have to wait until IPv6 comes to fruition. Whilst the core protocol specifications are well along the standards track, it is unlikely that IPv6 will be deployed commercially for several years yet.

Production IPv6 networks exist (notably the 6REN and WIDE projects) and production IPv6 stacks exist for the more popular operating systems and network hardware. However, there is currently little activity in the UK as most people are waiting for deployment success stories to be published before taking the risk of migrating to IPv6. At the time of writing, only the University of Southampton (the authors of this report) and Lancaster University are early adopters of IPv6 within JANET, and only Southampton have native IPv6 WAN links in use (to UUNET UK). The establishment of the European-led IPv6 Forum, which within a month of its launch already has 50 companies signed up to it, is a positive sign for future IPv6 growth.

2.10 Areas under Study by the IETF

The Internet Engineering Task Force (IETF) is the main body promoting new Internet standards (RFCs, or Requests for Comments). They have a number of Working Groups that drive forward activities in various areas. To appreciate the current hot topics in security, a good place to look is the list of IETF Security-related Working Groups:

An Open Specification for Pretty Good Privacy (openpgp)

Authenticated Firewall Traversal (aft)

Common Authentication Technology (cat)

Domain Name System Security (dnssec)

IP Security Protocol (ipsec)

Intrusion Detection Exchange Format (idwg)

One Time Password Authentication (otp)

Public-Key Infrastructure (X.509) (pkix)

S/MIME Mail Security (smime)

Secure Shell (secsh)

Simple Public Key Infrastructure (spki)

Transport Layer Security (tls)

Web Transaction Security (wts)

Which Groups are of relevance to this overview? In short, all of them, but there are two worth mentioning in particular. One is the Public-Key Infrastructure (X.509) Working Group (one chair of which is from Verisign). They promote X.509v3: “Many Internet protocols and applications which use the Internet employ public-key technology for security purposes and require a public-key infrastructure (PKI) to securely manage public keys for widely-distributed users or systems. The X.509 standard constitutes a widely-accepted basis for such an infrastructure, defining data formats and procedures related to distribution of public keys via certificates digitally signed by certification authorities (CAs).”

The other is the Transport Layer Security (TLS) Group. TLS was designed to supersede SSL 3.0, and Version 1.0 attained full RFC status in 1999. According to this RFC, the goals of the TLS protocol include cryptographic security, interoperability (“independent programmers should be able to develop applications utilizing TLS that will then be able to successfully exchange cryptographic parameters without knowledge of one another’s code”) and extensibility (“TLS seeks to provide a framework into which new public key and bulk encryption methods can be incorporated as necessary”). TLS is currently public-key oriented, and a set of Kerberos cipher suites is also being planned.

The IETF will continue to deliver solutions for security problems on the Internet through RFCs. It is very important to track their activities; in-house solutions that ignore IETF Drafts and final RFCs risk becoming isolated. The development of TLS is a good sign for interoperability, with at least one open source implementation (OpenSSL) already available.

