Disk Defragmentation

Fast, reliable disk performance is mandatory in today's complex information technology world, because system response and virtual memory performance depend on fast disk I/O. Unfortunately, the common condition of disk fragmentation leads to significant performance degradation. According to one vendor, 58 percent of any I/O operation is disk seek time, so reducing seek time by defragmenting files can play an important part in increasing application performance. How does such an intelligent file-placement program work, and does it really speed up Windows NT machines?

Computer performance

Computers play a major and fundamental role in today's world. Almost all applications and studies require a large share of the work, if not all of it, to be done on these machines, so improving their operation and performance yields better throughput and a marked improvement in the work itself. In fact, when they want to improve the operation of a computer running the Windows NT operating system, most people think of adding RAM or upgrading to a more powerful processor. Recent research suggests a simpler solution: every so often, run a disk defragmentation. This facility, available on Windows NT systems, can more than triple system responsiveness, according to a study by the independent software testing laboratory NSTL Inc. of Conshohocken, Pa. Almost all operating systems suffer the consequences of how files are organized on disk, yet very few are served by defragmentation software, mainly because their markets are smaller than Windows' and less appealing to software developers. By simply running such software, both individuals and corporations can enjoy faster system performance. How can this be explained?

Disk defragmentation at a glance

Files on a computer are stored on non-volatile storage devices called magnetic disks, and they often end up fragmented into pieces scattered all over the disk. This fragmentation occurs naturally as a user creates, appends to, deletes, or truncates files during normal system use. When the first file is saved to a disk, it is laid down on a track in contiguous clusters. In other words, the read/write head can move directly from one cluster to the next in one continuous, smooth operation: the head stays in one place over a single track and reads or writes the file as the disk moves beneath it. As more files are added, they too are written in contiguous clusters. As files are erased, they leave empty clusters that new files can be written to. Unfortunately, some of these clusters are not big enough to hold the new files. As a result, one fragment of the file is written to one cluster, and the rest of the file is divided, or fragmented, among whatever empty clusters exist on the disk. A disk that has undergone long or intense use shows little pattern or logic in the location of files: file fragments can be any distance from each other and from the read/write head.
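
To make this concrete, here is a toy Python model of cluster allocation. It is an illustration, not a description of any real file system: the disk size, file names, and first-fit strategy are all invented for the example. Deleting a file leaves a hole, and a later file too large for the hole gets split between the hole and the end of the disk:

    DISK_SIZE = 12                  # model the disk as a row of 12 clusters
    disk = [None] * DISK_SIZE

    def write_file(name, n_clusters):
        # First-fit allocation: take the first free clusters found,
        # splitting the file across gaps if no single gap is big enough.
        placed = []
        for i in range(DISK_SIZE):
            if disk[i] is None:
                disk[i] = name
                placed.append(i)
                if len(placed) == n_clusters:
                    return placed
        raise IOError("disk full while writing " + name)

    def delete_file(name):
        for i in range(DISK_SIZE):
            if disk[i] == name:
                disk[i] = None

    write_file("A", 4)              # A fills clusters 0-3, contiguous
    write_file("B", 3)              # B fills clusters 4-6, contiguous
    write_file("C", 3)              # C fills clusters 7-9, contiguous
    delete_file("B")                # deleting B leaves a 3-cluster hole
    print(write_file("D", 5))      # D is split: [4, 5, 6, 10, 11]

File D ends up in two pieces because no single run of free clusters was large enough to hold it, which is exactly how fragmentation accumulates on a real disk.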

Fragmentation causes the drive to write and read information more slowly, because the read/write head must spend time moving from track to track and waiting for the file fragments or empty clusters in those tracks to pass under it as the disk spins. As the level of disk fragmentation increases, overall system performance begins to decline. After defragmentation, the read/write head has less distance to travel, and because the left-over space is all in one place, it can accept a larger chunk of file data. Disk activity then proceeds smoothly and contiguously, transparently to the user.
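
As a back-of-envelope illustration of the seek cost (the track numbers below are invented, and real seek time also includes rotational latency), the total head travel needed to read a file can be compared for contiguous and scattered placements:

    def total_seek_distance(tracks):
        # Sum the track-to-track distances the head covers while
        # visiting the fragments in file order, starting from track 0.
        position, travel = 0, 0
        for t in tracks:
            travel += abs(t - position)
            position = t
        return travel

    contiguous = [100, 100, 100, 100]   # all fragments on one track
    fragmented = [100, 700, 40, 550]    # fragments scattered across tracks

    print(total_seek_distance(contiguous))  # 100: one seek, then no movement
    print(total_seek_distance(fragmented))  # 1870: over eighteen times as far

The absolute numbers mean nothing; the point is that scattered fragments multiply head travel, and head travel is mechanical and slow compared with everything else in the system.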

Fragmentation does not endanger user data; a heavily fragmented system will continue to do its work. But when files become dispersed in pieces over the disk, excessive read and write activity slows what is already the slowest component in the system. A typical fragmented disk is illustrated in the figure below.

The way to maintain fast I/O performance is to minimize the amount of fragmentation on the disk. The disk defragmenter solves the problem by periodically moving all of a file’s fragments into one contiguous file.

Figure: A file of 10 records (shown in yellow at the top) can be stored either in contiguous locations, with all records immediately adjacent to each other, or scattered in different disk locations. Free-space fragmentation occurs when files (like the three shown in orange at the bottom) are not arranged contiguously but are dispersed into three separate locations.

How defragmentation works

Defragmentation, sometimes called disk optimization, is a software-controlled operation that consolidates the scattered parts of files so that they are once again contiguous. Defragmenting begins with the software temporarily moving contiguous clusters of data to other, unused areas of the drive, which opens up areas of free contiguous space available for recording files. The drive then moves fragmented parts of a single file to the newly opened space, laying down the parts so that they are contiguous. After that, the defragmentation software juggles files and parts of files until as many files as possible are contiguous.
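
A minimal sketch of that consolidation step, reusing the toy cluster-map model from earlier, might look like the following. It simply rebuilds the map so that each file's clusters are contiguous and all free space ends up in one run at the end; a real defragmenter works in place, moves data in careful stages, and must leave the disk consistent if interrupted.

    def defragment(disk):
        # Collect file names in order of first appearance on the disk.
        files = []
        for cluster in disk:
            if cluster is not None and cluster not in files:
                files.append(cluster)
        # Lay each file down contiguously, then put all free space last.
        new_disk = []
        for name in files:
            new_disk += [name] * disk.count(name)
        new_disk += [None] * (len(disk) - len(new_disk))
        return new_disk

    before = ["A", None, "B", "A", None, "B", "A", None]
    print(defragment(before))
    # ['A', 'A', 'A', 'B', 'B', None, None, None]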

[Screenshots: the Disk Defragmenter window that opens when the tool starts scanning the disk; clicking Show Details opens a second box showing the blocks being moved.]

Measuring fragmentation?s impact

The fragmented file is the usual thing these days, and studies and statistics have been compiled to quantify the performance gains from defragmentation. International Data figures suggest that, worldwide, corporations are losing as much as US $50 billion per year in worker productivity and extra equipment costs by not tidying up the files on every server and workstation on their networks at regular intervals. Of that sum, an estimated $6 billion is due to unnecessary hardware upgrades bought to mask the performance loss caused by piecemeal file storage. According to Steve Widen, director of International Data's storage software research, "By using a defragmentation utility, it is possible to achieve performance gains that meet or exceed many hardware upgrades. From a cost standpoint alone, this is an attractive proposition."

Bibliography

IEEE Spectrum magazine, September 2000.

