Algorithm 1

High-level description:

  1. Assume the first item is largest.

  2. Look at each of the remaining items in the list and if it is larger than the largest item so far, make a note of it.

  3. The last noted item is the largest in the list when the process is complete.

(Quasi-)formal description: Written in prose but much closer to the high-level language of a computer program, the following is the more formal coding of the algorithm in pseudocode or pidgin code:

Algorithm LargestNumber
  Input: A non-empty list of numbers L.
  Output: The largest number in the list L.

  largest ← L₀
  for each item in the list (Length(L) ≥ 1), do
    if the item > largest, then
      largest ← the item
  return largest

  • "←" is a loose shorthand for "changes to". For instance, "largest ← item" means that the value of largest changes to the value of item.

  • "return" terminates the algorithm and outputs the value that follows. 

The example-diagram of Euclid's algorithm from T.L. Heath 1908, with more detail added. Euclid does not go beyond a third measuring and gives no numerical examples. Nicomachus gives the example of 49 and 21: "I subtract the less from the greater; 28 is left; then again I subtract from this the same 21 (for this is possible); 7 is left; I subtract this from 21, 14 is left; from which I again subtract 7 (for this is possible); 7 will be left, but 7 cannot be subtracted from 7." Heath comments that "The last phrase is curious, but the meaning of it is obvious enough, as also the meaning of the phrase about ending 'at one and the same number'" (Heath 1908:300).

Euclid’s algorithm appears as Proposition II in Book VII ("Elementary Number Theory") of his Elements. Euclid poses the problem: "Given two numbers not prime to one another, to find their greatest common measure". He defines "A number [to be] a multitude composed of units": a counting number, a positive integer not including 0. And to "measure" is to place a shorter measuring length s successively (q times) along longer length l until the remaining portion r is less than the shorter length s. In modern words, remainder r = l − q*s, q being the quotient, or remainder r is the "modulus", the integer-fractional part left over after the division.

For Euclid's method to succeed, the starting lengths must satisfy two requirements: (i) the lengths must not be 0, AND (ii) the subtraction must be "proper", i.e. a test must guarantee that the smaller of the two numbers is subtracted from the larger (alternatively, the two can be equal so their subtraction yields 0).

Euclid's original proof adds a third: the two lengths are not prime to one another. Euclid stipulated this so that he could construct a reductio ad absurdum proof that the two numbers' common measure is in fact the greatest. While Nicomachus' algorithm is the same as Euclid's, when the numbers are prime to one another it yields the number "1" for their common measure. So to be precise the following is really Nicomachus' algorithm.
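As a hedged illustration (not Euclid's or Nicomachus' own notation), the repeated-subtraction process quoted above can be sketched in Python; the function name nicomachus_gcd is invented here, and the code assumes both inputs are positive, per requirement (i) above.

def nicomachus_gcd(a, b):
    # Subtract the smaller number from the larger until the two are equal;
    # assumes a > 0 and b > 0 (it would loop forever on a zero input).
    while a != b:
        if a > b:
            a = a - b
        else:
            b = b - a
    return a

print(nicomachus_gcd(49, 21))       # prints 7, matching the quoted example
print(nicomachus_gcd(14157, 5950))  # prints 1 for numbers prime to one another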

A graphical expression of Euclid's algorithm using the example of 1599 and 650.

Example of 1599 and 650:

Step 1: 1599 = 650*2 + 299
Step 2: 650 = 299*2 + 52
Step 3: 299 = 52*5 + 39
Step 4: 52 = 39*1 + 13
Step 5: 39 = 13*3 + 0
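The same steps can be reproduced with a short Python sketch of the division-based form (using the remainder r = l − q*s as defined above); the function name euclid_steps is illustrative.

def euclid_steps(l, s):
    # Replace (l, s) by (s, r), where r is the remainder of l divided by s,
    # until the remainder is 0; print each step in the format used above.
    step = 1
    while s != 0:
        q, r = divmod(l, s)
        print(f"Step {step}: {l} = {s}*{q} + {r}")
        l, s = s, r
        step += 1
    return l

print(euclid_steps(1599, 650))   # prints Steps 1-5 above, then 13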

Only a few instruction types are required to execute Euclid's algorithm—some logical tests (conditional GOTO), unconditional GOTO, assignment (replacement), and subtraction.

  • A location is symbolized by upper case letter(s), e.g. S, A, etc.

  • The varying quantity (number) in a location will be written in lower case letter(s) and (usually) associated with the location's name. For example, location L at the start might contain the number l = 3009.


"Inelegant" is a translation of Knuth's version of the algorithm with a subtraction-based remainder-loop replacing his use of division (or a "modulus" instruction). Derived from Knuth 1973:2–4. Depending on the two numbers "Inelegant" may compute the g.c.d. in fewer steps than "Elegant".

The following algorithm is framed as Knuth's 4-step version of Euclid's and Nicomachus' algorithms, but rather than using division to find the remainder it uses successive subtractions of the shorter length s from the remaining length r until r is less than s. The high-level description, shown in the bracketed step labels, is adapted from Knuth 1973:2–4:

INPUT:

1 [Into two locations L and S put the numbers l and s that represent the two lengths]: INPUT L, S

2 [Initialize R: make the remaining length r equal to the starting/initial/input length l] R ← L

E0: [Ensure r ≥ s.]

3 [Ensure the smaller of the two numbers is in S and the larger in R]: IF R > S THEN the contents of L is the larger number, so skip over the exchange-steps 4, 5 and 6: GOTO step 7 ELSE swap the contents of R and S.

4 L ← R (this first step is redundant, but will be useful for later discussion).

5 R ← S

6 S ← L

E1: [Find remainder]: Until the remaining length r in R is less than the shorter length s in S, repeatedly subtract the measuring number s in S from the remaining length r in R.

7 IF S > R THEN done measuring so GOTO 10 ELSE measure again,

8 R ← R − S

9 [Remainder-loop]: GOTO 7.

E2: [Is the remainder 0?]: EITHER (i) the last measure was exact, the remainder in R is 0, and the program can halt, OR (ii) the algorithm must continue: the last measure left a remainder in R less than the measuring number in S.

10 IF R = 0 then done so GOTO step 15 ELSE continue to step 11,

E3: [Interchange s and r]: The nut of Euclid's algorithm. Use remainder r to measure what was previously the smaller number s; L serves as a temporary location.

11 L ← R

12 R ← S

13 S ← L

14 [Repeat the measuring process]: GOTO 7

OUTPUT:

15 [Done. S contains the greatest common divisor]: PRINT S

DONE:

16 HALT, END, STOP.
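The numbered steps above can be rendered roughly in Python as follows; this is a hedged translation, not Knuth's own code, and the function name inelegant_gcd is invented here. Like the original it assumes positive inputs (a zero input makes it loop forever, as discussed below).

def inelegant_gcd(l, s):
    r = l                    # step 2: R <- L
    if not (r > s):          # step 3: ensure the larger number ends up in r
        l = r                # steps 4-6: swap r and s, using l as a temporary
        r = s
        s = l
    while True:
        while not (s > r):   # steps 7-9: subtract s from r until r < s
            r = r - s
        if r == 0:           # step 10: the last measure was exact
            return s         # step 15: s holds the greatest common divisor
        l = r                # steps 11-13: the remainder becomes the new measure
        r = s
        s = l                # step 14: repeat the measuring process

print(inelegant_gcd(1599, 650))   # prints 13, as in the worked example above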

An elegant program for Euclid's algorithm

The following version of Euclid's algorithm requires only 6 core instructions to do what 13 are required to do by "Inelegant"; worse, "Inelegant" requires more types of instructions. The flowchart of "Elegant" can be found at the top of this article. In the (unstructured) BASIC language the steps are numbered, and the instruction LET [ ] = [ ] is the assignment instruction symbolized by ←.

5 REM Euclid's algorithm for greatest common divisor

6 PRINT "Type two integers greater than 0"

10 INPUT A,B

20 IF B=0 THEN GOTO 80

30 IF A > B THEN GOTO 60

40 LET B=B-A

50 GOTO 20

60 LET A=A-B

70 GOTO 20

80 PRINT A

90 END

How "Elegant" works: In place of an outer "Euclid loop", "Elegant" shifts back and forth between two "co-loops", an A > B loop that computes A ← A − B, and a B ≤ A loop that computes B ← B − A. This works because, when at last the minuend M is less than or equal to the subtrahend S ( Difference = Minuend − Subtrahend), the minuend can become s (the new measuring length) and the subtrahend can become the new r (the length to be measured); in other words the "sense" of the subtraction reverses. 

Does an algorithm do what its author wants it to do? A few test cases usually suffice to confirm core functionality. One source[47] uses 3009 and 884. Knuth suggested 40902, 24140. Another interesting case is the two relatively-prime numbers 14157 and 5950.
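A quick way to confirm the expected answers for these pairs is the standard library's math.gcd, which yields 17, 34, and 1 respectively (the last pair being relatively prime, as noted):

import math

# Check the test pairs mentioned above.
for pair in [(3009, 884), (40902, 24140), (14157, 5950)]:
    print(pair, math.gcd(*pair))
# (3009, 884) 17
# (40902, 24140) 34
# (14157, 5950) 1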

But exceptional cases must be identified and tested. Will "Inelegant" perform properly when R > S, S > R, R = S? Ditto for "Elegant": B > A, A > B, A = B? (Yes to all). What happens when one number is zero, or both numbers are zero? ("Inelegant" computes forever in all cases; "Elegant" computes forever when A = 0.) What happens if negative numbers are entered? Fractional numbers? If the input numbers, i.e. the domain of the function computed by the algorithm/program, is to include only positive integers including zero, then the failures at zero indicate that the algorithm (and the program that instantiates it) is a partial function rather than a total function. A notable failure due to exceptions is the Ariane 5 Flight 501 rocket failure.

Proof of program correctness by use of mathematical induction: Knuth demonstrates the application of mathematical induction to an "extended" version of Euclid's algorithm, and he proposes "a general method applicable to proving the validity of any algorithm". Tausworthe proposes that a measure of the complexity of a program be the length of its correctness proof.

Elegance (compactness) versus goodness (speed) : With only 6 core instructions, "Elegant" is the clear winner compared to "Inelegant" at 13 instructions. However, "Inelegant" is faster (it arrives at HALT in fewer steps). Algorithm analysis indicates why this is the case: "Elegant" does two conditional tests in every subtraction loop, whereas "Inelegant" only does one. As the algorithm (usually) requires many loop-throughs, on average much time is wasted doing a "B = 0?" test that is needed only after the remainder is computed.

Can the algorithms be improved?: Once the programmer judges a program "fit" and "effective"—that is, it computes the function intended by its author—then the question becomes, can it be improved?

The compactness of "Inelegant" can be improved by the elimination of 5 steps. But Chaitin proved that compacting an algorithm cannot be automated by a generalized algorithm;[51] rather, it can only be done heuristically, i.e. by exhaustive search (examples to be found at Busy beaver), trial and error, cleverness, insight, application of inductive reasoning, etc. Observe that steps 4, 5 and 6 are repeated in steps 11, 12 and 13. Comparison with "Elegant" provides a hint that these steps together with steps 2 and 3 can be eliminated. This reduces the number of core instructions from 13 to 8, which makes it "more elegant" than "Elegant" at 9 steps.

The speed of "Elegant" can be improved by moving the B=0? test outside of the two subtraction loops. This change calls for the addition of 3 instructions (B=0?, A=0?, GOTO). Now "Elegant" computes the example-numbers faster; whether for any given A, B and R, S this is always the case would require a detailed analysis.
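One way to realize that restructuring, sketched here in Python rather than in the BASIC of the listing (the function name faster_gcd and the exact placement of the added zero tests are this sketch's own choices, not Knuth's code): each subtraction co-loop now performs a single comparison per subtraction, and the zero test runs only once per computed remainder.

def faster_gcd(a, b):
    if b == 0:            # added B=0? guard (otherwise the A-loop never ends)
        return a
    if a == 0:            # added A=0? guard
        return b
    while True:
        while a >= b:     # A-loop: one comparison per subtraction
            a = a - b
        if a == 0:        # zero test only after a remainder is produced
            return b
        while b >= a:     # B-loop: one comparison per subtraction
            b = b - a
        if b == 0:
            return a

print(faster_gcd(1599, 650))   # prints 13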

It is frequently important to know how much of a particular resource (such as time or storage) is theoretically required for a given algorithm. Methods have been developed for the analysis of algorithms to obtain such quantitative answers (estimates); for example, the largest-number algorithm above has a time requirement of O(n), using the big O notation with n as the length of the list. At all times the algorithm only needs to remember two values: the largest number found so far, and its current position in the input list. Therefore it is said to have a space requirement of O(1), if the space required to store the input numbers is not counted, or O(n) if it is counted.

Different algorithms may complete the same task with a different set of instructions in less or more time, space, or 'effort' than others. For example, a binary search algorithm will usually outperform a brute-force sequential search when used for table lookups on sorted lists.
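As a hedged illustration of that claim, the following Python sketch contrasts a linear scan with a binary search built on the standard library's bisect module (the function names and sample table are illustrative):

import bisect

def sequential_search(sorted_list, target):
    # Brute-force scan: up to len(sorted_list) comparisons.
    for i, value in enumerate(sorted_list):
        if value == target:
            return i
    return -1

def binary_search(sorted_list, target):
    # Binary search: roughly log2(len(sorted_list)) comparisons on a sorted list.
    i = bisect.bisect_left(sorted_list, target)
    if i < len(sorted_list) and sorted_list[i] == target:
        return i
    return -1

table = list(range(0, 1_000_000, 2))
print(sequential_search(table, 999_998), binary_search(table, 999_998))   # 499999 499999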

The analysis and study of algorithms is a discipline of computer science, and is often practiced abstractly without the use of a specific programming language or implementation. In this sense, algorithm analysis resembles other mathematical disciplines in that it focuses on the underlying properties of the algorithm and not on the specifics of any particular implementation. Usually pseudocode is used for analysis, as it is the simplest and most general representation. However, ultimately, most algorithms are usually implemented on particular hardware/software platforms and their algorithmic efficiency is eventually put to the test using real code.

Empirical testing is useful because it may uncover unexpected interactions that affect performance. Benchmarks may be used to compare the before/after performance of potential improvements to an algorithm after program optimization.

There are various ways to classify algorithms, each with its own merits.

One way to classify algorithms is by implementation means.

  • Recursion or iteration: A recursive algorithm is one that invokes (makes reference to) itself repeatedly until a certain condition matches, which is a method common to functional programming. Iterative algorithms use repetitive constructs like loops and sometimes additional data structures like stacks to solve the given problems. Some problems are naturally suited for one implementation or the other. For example, towers of Hanoi is well understood using recursive implementation. Every recursive version has an equivalent (but possibly more or less complex) iterative version, and vice versa; see the sketch after this list.

  • Logical: An algorithm may be viewed as controlled logical deduction. This notion may be expressed as: Algorithm = logic + control. The logic component expresses the axioms that may be used in the computation and the control component determines the way in which deduction is applied to the axioms. This is the basis for the logic programming paradigm. In pure logic programming languages the control component is fixed and algorithms are specified by supplying only the logic component. The appeal of this approach is the elegant semantics: a change in the axioms has a well-defined change in the algorithm.
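As mentioned in the recursion-or-iteration item above, here is a hedged Python sketch showing the same computation (Euclid's algorithm, the running example of this article) in both recursive and iterative form; the function names are illustrative.

def gcd_recursive(a, b):
    # Recursive form: the function invokes itself until b == 0.
    if b == 0:
        return a
    return gcd_recursive(b, a % b)

def gcd_iterative(a, b):
    # Equivalent iterative form: a loop replaces the self-reference.
    while b != 0:
        a, b = b, a % b
    return a

print(gcd_recursive(1599, 650), gcd_iterative(1599, 650))   # 13 13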
