I wrote a few weeks ago about the transmission of the knowledge of Hindu and Arabic mathematicians to Europe in the 1200s and 1300s and the translation of Fibonacci’s Liber Abaci into Italian dialect in the 1400s.  Over the next two to three hundred years, what is generally known as classical algebra began to take shape in Europe.  The symbols and techniques that we study today became widely used and accepted throughout Europe during this period (1400-1800).

Most of the descriptions below come directly from Jeff Miller’s website Earliest Uses of Various Mathematical Symbols.  One of the primary resources for the information at that site is the math historian Florian Cajori.  If you are interested in the history of math, you will inevitably come across his name at some point!

The equal sign (=)

The equal symbol (=) was first used by Robert Recorde (c. 1510-1558) in 1557 in The Whetstone of Witte. He wrote, “I will sette as I doe often in woorke use, a paire of parralles, or Gemowe lines of one lengthe, thus : ==, bicause noe 2, thynges, can be moare equalle.” Recorde used an elongated form of the present symbol. He proposed no other algebraic symbol (Cajori vol. 1, page 164).

Here is an image of the page of The Whetstone of Witte on which the equal sign is introduced.

The equal symbol did not appear in print again until 1618, when it was used in an anonymous Appendix, very probably due to Oughtred, printed in Edward Wright’s English translation of Napier’s Descriptio. It reappeared in 1631, when it was used by Thomas Harriot and William Oughtred (Cajori vol. 1, page 298).

Cajori states (vol. 1, page 126):

“A manuscript, kept in the Library of the University of Bologna, contains data regarding the sign of equality (=). These data have been communicated to me by Professor E. Bortolotti and tend to show that (=) as a sign of equality was developed at Bologna independently of Robert Recorde and perhaps earlier.”

Cajori elsewhere writes that the manuscript was probably written between 1550 and 1568.

The plus and minus symbols (+,-)

The + and – symbols first appeared in print in Mercantile Arithmetic or Behende und hüpsche Rechenung auff allen Kauffmanschafft, by Johannes Widmann (born c. 1460), published in Leipzig in 1489. However, they referred not to addition or subtraction or to positive or negative numbers, but to surpluses and deficits in business problems (Cajori vol. 1, page 128).

Here is an image of the first use in print of the + and – signs, from Widmann’s Behennde vnd hüpsche Rechnung. This image is taken from the Augsburg edition of 1526.

The multiplication and division symbols

Multiplication

X was used by William Oughtred (1574-1660) in the Clavis Mathematicae (Key to Mathematics), composed about 1628 and published in London in 1631 (Smith). Cajori calls X St. Andrew’s Cross.

X actually appears earlier, in 1618 in an anonymous appendix to Edward Wright’s translation of John Napier’s Descriptio (Cajori vol. 1, page 197). However, this appendix is believed to have been written by Oughtred.

The dot (·) was advocated by Gottfried Wilhelm Leibniz (1646-1716). According to Cajori (vol. 1, page 267):

“The dot was introduced as a symbol for multiplication by G. W. Leibniz. On July 29, 1698, he wrote in a letter to John Bernoulli: ‘I do not like X as a symbol for multiplication, as it is easily confounded with x; … often I simply relate two quantities by an interposed dot and indicate multiplication by ZC · LM. Hence, in designating ratio I use not one point but two points, which I use at the same time for division.’”

Cajori shows the symbol as a raised dot. However, according to Margherita Barile, consulting Gerhardt’s edition of Leibniz’s Mathematische Schriften (G. Olms, 1971), the dot is never raised, but is located at the bottom of the line. She writes that the non-raised dot as a symbol for multiplication appears in all the letters of 1698 and earlier, and that, according to the same edition, it already appears in a letter from Johann Bernoulli to Leibniz dated September 2, 1694 (see vol. III, part 1, page 148).

The dot was used earlier by Thomas Harriot (1560-1621) in Analyticae Praxis ad Aequationes Algebraicas Resolvendas, which was published posthumously in 1631, and by Thomas Gibson in 1655 in Syntaxis mathematica. However, Cajori says, “it is doubtful whether Harriot or Gibson meant these dots for multiplication. They are introduced without explanation. It is much more probable that these dots, which were placed after numerical coefficients, are survivals of the dots habitually used in old manuscripts and in early printed books to separate or mark off numbers appearing in the running text” (Cajori vol. 1, page 268).

However, Scott (page 128) writes that Harriot was “in the habit of using the dot to denote multiplication.” And Eves (page 231) writes, “Although Harriot on occasion used the dot for multiplication, this symbol was not prominently used until Leibniz adopted it.”

The asterisk (*) was used by Johann Rahn (1622-1676) in 1659 in Teutsche Algebra (Cajori vol. 1, page 211).

Division

Close parenthesis. The arrangement 8)24 was used by Michael Stifel (1486 or 1487-1567) in Arithmetica integra, which was completed in 1540 and published in 1544 in Nuernberg (Cajori vol. 1, page 269; DSB).

The colon (:) was used in 1633 in a text entitled Johnson Arithmetik; In two Bookes (2nd ed.: London, 1633). However, Johnson used the symbol only to indicate fractions (for example, three-fourths was written 3:4); he did not use the symbol for division “dissociated from the idea of a fraction” (Cajori vol. 1, page 276).

Gottfried Wilhelm Leibniz (1646-1716) used : for both ratio and division in 1684 in the Acta eruditorum (Cajori vol. 1, page 295).

The obelus (÷) was first used as a division symbol by Johann Rahn (or Rhonius) (1622-1676) in 1659 in Teutsche Algebra (Cajori vol. 2, page 211).

Here is the page in which the division symbol first appears in print, as reproduced in Cajori.

Using letters to represent variables

Greek letters. The use of letters to represent general numbers goes back to Greek antiquity. Aristotle frequently used single capital letters or two letters for the designation of magnitude or number (Cajori vol. 2, page 1).

Diophantus (fl. about 250-275) used a Greek letter with an accent to represent an unknown. G. H. F. Nesselmann takes this symbol to be the final sigma and remarks that probably its selection was prompted by the fact that it was the only letter in the Greek alphabet which was not used in writing numbers. However, differing opinions exist (Cajori vol. 1, page 71).

In 1463, Benedetto of Florence used the Greek letter rho for an unknown in Trattato di praticha d’arismetrica. (Franci and Rigatelli, p. 314)

Roman letters. In Leonardo of Pisa’s Liber abbaci (1202) the representation of given numbers by small letters is found (Cajori vol. 2, page 2).

Jordanus Nemorarius (1225-1260) used letters to replace numbers.

Christoff Rudolff used the letters a, c, and d to represent numbers, although not in algebraic equations, in Behend vnnd Hubsch Rechnung (1525) (Cajori vol. 1, page 136).

Michael Stifel used q (an abbreviation for quantita), as Cardan had already done, but he also used A, B, C, D, and F for unknowns in 1544 in Arithmetica integra (Cajori vol. 1, page 140).

Girolamo Cardan (1501-1576) used the letters a and b to designate known numbers in De regula aliza (1570) (Cajori vol. 1, page 120).

In 1575 Guilielmus Xylander translated the Arithmetica of Diophantus from Greek into Latin and used N (numerus) for unknowns in equations (Cajori vol. 1, page 380).

Wikipedia quotes the MacTutor archive as saying that the Arab mathematician Al-Qalasadi (1412-1482) was partially responsible for introducing some of the symbolism used in algebra, and the MacTutor pages back this up.  However, Wikipedia goes on to say that

The symbol x now commonly denotes an unknown variable. Even though any letter can be used, x is the most common choice. This usage can be traced back to the Arabic word šay’ شيء = “thing,” used in Arabic algebra texts such as the Al-Jabr, and was taken into Old Spanish with the pronunciation “šei,” which was written xei, and was soon habitually abbreviated to x. (The Spanish pronunciation of “x” has changed since). Some sources say that this x is an abbreviation of Latin causa, which was a translation of Arabic شيء. This started the habit of using letters to represent quantities in algebra. In mathematics, an italicized x is often used to avoid potential confusion with the multiplication symbol.

This portion of their entry is not footnoted, and although it seems to make sense, I would investigate this idea further before accepting it as true.

Next week I’ll talk about the development of solving equations in classical algebra.  In my very first blog entry here, I mentioned how this led to the development of what is known as Modern Algebra or Abstract Algebra; next week we’ll look at some of the details of that development.
