
Script-Driven Tests with Jython


Overview

This section covers the following topics:

Test Agent Scripts

Example Test Agent Scripts

Jython Script Guide

Data Types

Numbers

Strings

Unicode Strings

Lists

First Steps Towards Programming

Flow Control

if

for

range ( )

break, continue, else loops

pass

Defining Functions

More on Defining Functions

Default Argument Values

Keyword Arguments

Arbitrary Argument Lists

Lambda Forms

Documentation Strings

Data Structures

More on Lists

append(x)

extend(L)

insert(i,x)

remove(x)

pop([i])

index(x)

count(x)

sort( )

reverse( )

Using Lists as Stacks

Using Lists as Queues

Functional Programming Tools

List Comprehensions

del

Tuples and Sequences

Dictionaries

More on Conditions

Comparing Sequences and Other Types

Modules

More on Modules

The Module Search Path

Compiled Python Files

Standard Modules

dir( )

Packages

Importing * From a Package

Intra-Package References

Input and Output

Fancier Output Formatting

Reading and Writing Files

Methods of File Objects

The pickle Module

Errors and Exceptions

Syntax Errors

Exceptions

Handling Exceptions

Raising Exceptions

User-defined Exceptions

Defining Clean-Up Actions

Classes

Terminology

Python Scopes and Name Spaces

Definitions

First Look at Classes

Random Remarks

Inheritance

Multiple Inheritance

Private Variables

Odds and Ends

Floating Point Numbers

Representation Error

Working With Scripting Languages

Writing A Test Script

Running A Test Script

Using Agentbase In A Web Test Script

Jump Start A Web Test Script

Transform A TestGen4Web Test Into A Jython Test Script

Use The Recorder To Write a Jython Test Script

Log To

Log Path / Name

Log Level

Sleep Time

Success Responses

Follow HTTP 302 Redirects

Load <IMG> Tag References

Emulate Browser Image Caching

When Is It Appropriate To Use TestGen4Web and the Recorder?

Using TOOL Protocol Handlers In A Script

Using Scripts in TestScenarios

JSR 223 and Class Instances

eMail Protocol Handler

Send a Simple Email Message

Receive and Delete Email Messages

Send an Email Message with File Attachments

Receive an Email with File Attachment

MailProtocol Handler Implementation

Test Agent Scripts

Examples

Read the Flex Tutorial for an example Jython test script.


Jython Scripting Language

This is an overview and introduction to the TestMaker scripting language.

  • This Script Guide is mostly derived from a Python Tutorial by Guido van Rossum and Fred L. Drake, Jr. that appears on the Python web site. The original covers a few additional topics about the scripting language that are not present in this manual.
Data Types

Let us begin by understanding the data types possible in the scripting language interpreter.

Numbers

The script language interpreter acts as a simple calculator: you can type an expression at it and it will write the value. Expression syntax is straightforward: the operators +, -, * and / work just like in most other languages (for example, Pascal or C); parentheses can be used for grouping. For example:

 

>>> 2+2

4

>>> # This is a comment

... 2+2

4

>>> 2+2 # and a comment on the same line as code

4

>>> (50-5*6)/4

5

>>> # Integer division returns the floor:

... 7/3

2

>>> 7/-3

-3

Like in C, the equal sign ( = ) is used to assign a value to a variable. The value of an assignment is not written:

 

>>> width = 20

>>> height = 5*9

>>> width * height

900

A value can be assigned to several variables simultaneously:

 

>>> x = y = z = 0 # Zero x, y and z

>>> x

0

>>> y

0

>>> z

0

There is full support for floating point; operators with mixed type operands convert the integer operand to floating point: 

 

>>> 3 * 3.75 / 1.5

7.5

>>> 7.0 / 2

3.5

Complex numbers are also supported; imaginary numbers are written with a suffix of j or J . Complex numbers with a nonzero real component are written as (real+imagj), or can be created with the complex(real, imag) function.

 

>>> 1j * 1J

(-1+0j)

>>> 1j * complex(0,1)

(-1+0j)

>>> 3+1j*3

(3+3j)

>>> (3+1j)*3

(9+3j)

>>> (1+2j)/(1+1j)

(1.5+0.5j)

Complex numbers are always represented as two floating point numbers, the real and imaginary part. To extract these parts from a complex number z , use z.real and z.imag .

 

>>> a=1.5+0.5j

>>> a.real

1.5

>>> a.imag

0.5

The conversion functions to floating point and integer ( float( ), int( ) and long( ) ) don't work for complex numbers -- there is no one correct way to convert a complex number to a real number. Use abs(z) to get its magnitude (as a float) or z.real to get its real part.

 

>>> a=3.0+4.0j

>>> float(a)

Traceback (most recent call last):

File "<stdin>", line 1, in ?

TypeError: can't convert complex to float; use e.g. abs(z)

>>> a.real

3.0

>>> a.imag

4.0

>>> abs(a) # sqrt(a.real**2 + a.imag**2)

5.0

>>>

In interactive mode, the last printed expression is assigned to the variable “_” . This means that when you are using Python as a desk calculator, it is somewhat easier to continue calculations, for example:

 

>>> tax = 12.5 / 100

>>> price = 100.50

>>> price * tax

12.5625

>>> price + _

113.0625

>>> round(_, 2)

113.06

>>>

This variable should be treated as read-only by the user. Don't explicitly assign a value to it -- you would create an independent local variable with the same name masking the built-in variable with its magic behavior.

Strings

Besides numbers, Python can also manipulate strings, which can be expressed in several ways. They can be enclosed in single quotes or double quotes:

 

>>> 'spam eggs'

'spam eggs'

>>> 'doesn\'t'

"doesn't"

>>> "doesn't"

"doesn't"

>>> '"Yes," he said.'

'"Yes," he said.'

>>> "\"Yes,\" he said."

'"Yes," he said.'

>>> '"Isn\'t," she said.'

'"Isn\'t," she said.'

String literals can span multiple lines in several ways. Continuation lines can be used, with a backslash as the last character on the line indicating that the next line is a logical continuation of the line:

 

hello = "This is a rather long string containing\n\

several lines of text just as you would do in C.\n\

Note that whitespace at the beginning of the line is\

significant."

print hello

Note that newlines would still need to be embedded in the string using \n ; the newline following the trailing backslash is discarded. This example would print the following:

 

This is a rather long string containing

several lines of text just as you would do in C.

Note that whitespace at the beginning of the line is significant.

If we make the string literal a raw string, however, the \n sequences are not converted to newlines, but the backslash at the end of the line and the newline character in the source, are both included in the string as data. Thus, the example:

 

hello = r"This is a rather long string containing\n\

several lines of text much as you would do in C."

print hello

would print:

 

This is a rather long string containing\n\

several lines of text much as you would do in C.

Or, strings can be surrounded in a pair of matching triple-quotes: """ or ''' . End of lines do not need to be escaped when using triple-quotes, but they will be included in the string.

 

print """

Usage: thingy [OPTIONS]

-h Display this usage message

-H hostname Hostname to connect to

"""

produces the following output:

 

Usage: thingy [OPTIONS]

-h Display this usage message

-H hostname Hostname to connect to

The interpreter prints the result of string operations in the same way as they are typed for input: inside quotes and with quotes and other funny characters escaped by backslashes, to show the precise value. The string is enclosed in double quotes if the string contains a single quote and no double quotes, else it's enclosed in single quotes. (The print statement, described later, can be used to write strings without quotes or escapes.)
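
For example, here is the same string first echoed by the interpreter (with quotes and escapes shown) and then written by the print statement:

>>> 'doesn\'t'
"doesn't"
>>> print 'doesn\'t'
doesn't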

Strings can be concatenated (glued together) with the + operator, and repeated with * :

 

>>> word = 'Help' + 'A'

>>> word

'HelpA'

>>> '<' + word*5 + '>'

'<HelpAHelpAHelpAHelpAHelpA>'

Two string literals next to each other are automatically concatenated; the first line above could also have been written word = 'Help' 'A' ; this only works with two literals, not with arbitrary string expressions:

 

>>> import string

>>> 'str' 'ing' # <- This is ok

'string'

>>> string.strip('str') + 'ing' # <- This is ok

'string'

>>> string.strip('str') 'ing' # <- This is invalid

File "<stdin>", line 1, in ?

string.strip('str') 'ing'

^

SyntaxError: invalid syntax

Strings can be subscripted (indexed); like in C, the first character of a string has subscript (index) 0. There is no separate character type; a character is simply a string of size one. Like in Icon, substrings can be specified with the slice notation: two indices separated by a colon.

 

>>> word[4]

'A'

>>> word[0:2]

'He'

>>> word[2:4]

'lp'

Unlike a C string, Python strings cannot be changed. Assigning to an indexed position in the string results in an error:

 

>>> word[0] = 'x'

Traceback (most recent call last):

File "<stdin>", line 1, in ?

TypeError: object doesn't support item assignment

>>> word[:1] = 'Splat'

Traceback (most recent call last):

File "<stdin>", line 1, in ?

TypeError: object doesn't support slice assignment

However, creating a new string with the combined content is easy and efficient:

 

>>> 'x' + word[1:]

'xelpA'

>>> 'Splat' + word[4]

'SplatA'

Slice indices have useful defaults; an omitted first index defaults to zero, an omitted second index defaults to the size of the string being sliced.

 

>>> word[:2] # The first two characters

'He'

>>> word[2:] # All but the first two characters

'lpA'

Here's a useful invariant of slice operations: s[:i] + s[i:] equals s .

>>> word[:2] + word[2:]

'HelpA'

>>> word[:3] + word[3:]

'HelpA'

Degenerate slice indices are handled gracefully: an index that is too large is replaced by the string size, an upper bound smaller than the lower bound returns an empty string.

 

>>> word[1:100]

'elpA'

>>> word[10:]

''

>>> word[2:1]

''

Indices may be negative numbers, to start counting from the right. For example:

 

>>> word[-1] # The last character

'A'

>>> word[-2] # The last-but-one character

'p'

>>> word[-2:] # The last two characters

'pA'

>>> word[:-2] # All but the last two characters

'Hel'

But note that -0 is really the same as 0 , so it does not count from the right!

 

>>> word[-0] # (since -0 equals 0)

'H'

Out-of-range negative slice indices are truncated, but don't try this for single-element (non-slice) indices:

 

>>> word[-100:]

'HelpA'

>>> word[-10] # error

Traceback (most recent call last):

File "<stdin>", line 1, in ?

IndexError: string index out of range

The best way to remember how slices work is to think of the indices as pointing between characters, with the left edge of the first character numbered 0 . Then the right edge of the last character of a string of n characters has index n , for example:

 

+---+---+---+---+---+

| H | e | l | p | A |

+---+---+---+---+---+

0 1 2 3 4 5

-5 -4 -3 -2 -1

The first row of numbers gives the position of the indices 0...5 in the string; the second row gives the corresponding negative indices. The slice from i to j consists of all characters between the edges labeled i and j , respectively.

For non-negative indices, the length of a slice is the difference of the indices, if both are within bounds. For example, the length of word[1:3] is 2 .

The built-in function len( ) returns the length of a string:

 

>>> s = 'supercalifragilisticexpialidocious'

>>> len(s)

34

Unicode Strings

Starting with Python 2.0 a new data type for storing text data is available to the programmer: the Unicode object. It can be used to store and manipulate Unicode data (see http://www.unicode.org/ ) and integrates well with the existing string objects providing auto-conversions where necessary.

Unicode has the advantage of providing one ordinal for every character in every script used in modern and ancient texts. Previously, there were only 256 possible ordinals for script characters and texts were typically bound to a code page which mapped the ordinals to script characters. This led to much confusion, especially with respect to internationalization (usually written as i18n -- i + 18 characters + n ) of software. Unicode solves these problems by defining one code page for all scripts.

Creating Unicode strings in Python is just as simple as creating normal strings:

 

>>> u'Hello World !'

u'Hello World !'

The small u in front of the quote indicates that a Unicode string is supposed to be created. If you want to include special characters in the string, you can do so by using the Python Unicode-Escape encoding. The following example shows how:

 

>>> u'Hello\u0020World !'

u'Hello World !'

The escape sequence \u0020 indicates to insert the Unicode character with the ordinal value 0x0020 (the space character) at the given position.

Other characters are interpreted by using their respective ordinal values directly as Unicode ordinals. If you have literal strings in the standard Latin-1 encoding that is used in many Western countries, you will find it convenient that the lower 256 characters of Unicode are the same as the 256 characters of Latin-1.
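
For example, the ordinal of the Latin-1 character é (0xE9) is the same value when viewed as a Unicode ordinal:

>>> ord(u'\xe9')    # 0xE9 is é in Latin-1
233
>>> unichr(233)
u'\xe9'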

For experts, there is also a raw mode just like the one for normal strings. You have to prefix the opening quote with ur to have Python use the Raw-Unicode-Escape encoding. It will only apply the above \uXXXX conversion if there is an uneven number of backslashes in front of the small u .

 

>>> ur'Hello\u0020World !'

u'Hello World !'

>>> ur'Hello\\u0020World !'

u'Hello\\\\u0020World !'

The raw mode is most useful when you have to enter lots of backslashes, as can be necessary in regular expressions.

Apart from these standard encodings, Python provides a whole set of other ways of creating Unicode strings on the basis of a known encoding.

The built-in function unicode( ) provides access to all registered Unicode codecs (COders and DECoders). Some of the more well known encodings which these codecs can convert are Latin-1, ASCII, UTF-8, and UTF-16 . The latter two are variable-length encodings that store each Unicode character in one or more bytes. The default encoding is normally set to ASCII, which passes through characters in the range 0 to 127 and rejects any other characters with an error. When an Unicode string is printed, written to a file, or converted with str( ) , conversion takes place using this default encoding.

 

>>> u"abc"

u'abc'

>>> str(u"abc")

'abc'

>>> u"äöü"

u'\xe4\xf6\xfc'

>>> str(u"äöü")

Traceback (most recent call last):

File "<stdin>", line 1, in ?

UnicodeError: ASCII encoding error: ordinal not in range(128)

To convert a Unicode string into an 8-bit string using a specific encoding, Unicode objects provide an encode( ) method that takes one argument, the name of the encoding. Lowercase names for encodings are preferred.

 

>>> u"äöü".encode('utf-8')

'\xc3\xa4\xc3\xb6\xc3\xbc'

If you have data in a specific encoding and want to produce a corresponding Unicode string from it, you can use the unicode( ) function with the encoding name as the second argument.

 

>>> unicode('\xc3\xa4\xc3\xb6\xc3\xbc', 'utf-8')

u'\xe4\xf6\xfc'

 

Lists

Python knows a number of compound data types, used to group together other values. The most versatile is the list, which can be written as a list of comma-separated values (items) between square brackets. List items need not all have the same type.

 

>>> a = ['spam', 'eggs', 100, 1234]

>>> a

['spam', 'eggs', 100, 1234]

Like string indices, list indices start at 0 , and lists can be sliced, concatenated and so on:

 

>>> a[0]

'spam'

>>> a[3]

1234

>>> a[-2]

100

>>> a[1:-1]

['eggs', 100]

>>> a[:2] + ['bacon', 2*2]

['spam', 'eggs', 'bacon', 4]

>>> 3*a[:3] + ['Boe!']

['spam', 'eggs', 100, 'spam', 'eggs', 100, 'spam', 'eggs', 100, 'Boe!']

Unlike strings, which are immutable, it is possible to change individual elements of a list:

 

>>> a

['spam', 'eggs', 100, 1234]

>>> a[2] = a[2] + 23

>>> a

['spam', 'eggs', 123, 1234]

Assignment to slices is also possible and this can even change the size of the list:

 

>>> # Replace some items:

... a[0:2] = [1, 12]

>>> a

[1, 12, 123, 1234]

>>> # Remove some:

... a[0:2] = []

>>> a

[123, 1234]

>>> # Insert some:

... a[1:1] = ['bletch', 'xyzzy']

>>> a

[123, 'bletch', 'xyzzy', 1234]

>>> a[:0] = a # Insert (a copy of) itself at the beginning

>>> a

[123, 'bletch', 'xyzzy', 1234, 123, 'bletch', 'xyzzy', 1234]

The built-in function len( ) also applies to lists:

 

>>> len(a)

8

It is possible to nest lists (create lists containing other lists), for example:

 

>>> q = [2, 3]

>>> p = [1, q, 4]

>>> len(p)

3

>>> p[1]

[2, 3]

>>> p[1][0]

2

>>> p[1].append('xtra') # See section 5.1

>>> p

[1, [2, 3, 'xtra'], 4]

>>> q

[2, 3, 'xtra']

Note that in the last example, p[1] and q really refer to the same object.

First Steps Towards Programming

Of course, we can use Python for more complicated tasks than adding two and two together. For instance, we can write an initial sub-sequence of the Fibonacci series as follows:

 

>>> # Fibonacci series:

... # the sum of two elements defines the next

... a, b = 0, 1

>>> while b < 10:

... print b

... a, b = b, a+b

...

1

1

2

3

5

8

This example introduces several new features.

  • The first line contains a multiple assignment: the variables a and b simultaneously get the new values 0 and 1. On the last line this is used again, demonstrating that the expressions on the right-hand side are all evaluated first before any of the assignments take place. The right-hand side expressions are evaluated from left to right.
  • The while loop executes as long as the condition (here: b < 10) remains true. In Python, like in C, any non-zero integer value is true; zero is false. The condition may also be a string or list value, in fact any sequence; anything with a non-zero length is true, empty sequences are false. The test used in the example is a simple comparison. The standard comparison operators are written the same as in C: < (less than), > (greater than), == (equal to), <= (less than or equal to), >= (greater than or equal to) and != (not equal to).
  • The body of the loop is indented: indentation is Python's way of grouping statements. Python currently does not provide an intelligent input line editing facility, so you have to type a tab or space(s) for each indented line. In practice you will prepare more complicated input for Python with a text editor; most text editors have an auto-indent facility. When a compound statement is entered interactively, it must be followed by a blank line to indicate completion (since the parser cannot guess when you have typed the last line).
  • Each line within a basic block must be indented by the same amount.
  • The print statement writes the value of the expression(s) it is given. It differs from just writing the expression you want to write (as we did earlier in the calculator examples) in the way it handles multiple expressions and strings. Strings are printed without quotes and a space is inserted between items, so you can format things nicely, like this:

 

>>> i = 256*256

>>> print 'The value of i is', i

The value of i is 65536

A trailing comma avoids the newline after the output:

 

>>> a, b = 0, 1

>>> while b < 1000:

... print b,

... a, b = b, a+b

...

1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987

  • The interpreter inserts a newline before it prints the next prompt if the last line was not completed.
Flow Control

Besides the while statement just introduced, Python knows the usual control flow statements known from other languages, with some twists.

if

Perhaps the most well-known statement type is the if statement. For example:

  

>>> x = int(raw_input("Please enter an integer: "))

>>> if x < 0:

... x = 0

... print 'Negative changed to zero'

... elif x == 0:

... print 'Zero'

... elif x == 1:

... print 'Single'

... else:

... print 'More'

...

There can be zero or more elif parts, and the else part is optional. The keyword elif is short for else if and is useful to avoid excessive indentation. An if ... elif ... elif ... sequence is a substitute for the switch or case statements found in other languages.

for

The for statement in Python differs a bit from what you may be used to in C or Pascal. Rather than always iterating over an arithmetic progression of numbers (like in Pascal), or giving the user the ability to define both the iteration step and halting condition (as C), Python's for  statement iterates over the items of any sequence (a list or a string), in the order that they appear in the sequence. For example:

 

>>> # Measure some strings:

... a = ['cat', 'window', 'defenestrate']

>>> for x in a:

... print x, len(x)

...

cat 3

window 6

defenestrate 12

It is not safe to modify the sequence being iterated over in the loop (this can only happen for mutable sequence types, such as lists). If you need to modify the list you are iterating over (for example, to duplicate selected items) you must iterate over a copy. The slice notation makes this particularly convenient:

 

>>> for x in a[:]: # make a slice copy of the entire list

... if len(x) > 6: a.insert(0, x)

...

>>> a

['defenestrate', 'cat', 'window', 'defenestrate']

 

range ( )

If you do need to iterate over a sequence of numbers, the built-in function range( ) comes in handy. It generates lists containing arithmetic progressions:

 

>>> range(10)

[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

The given end point is never part of the generated list; range(10) generates a list of 10 values, exactly the legal indices for items of a sequence of length 10. It is possible to let the range start at another number, or to specify a different increment (even negative; sometimes this is called the step ):

 

>>> range(5, 10)

[5, 6, 7, 8, 9]

>>> range(0, 10, 3)

[0, 3, 6, 9]

>>> range(-10, -100, -30)

[-10, -40, -70]

To iterate over the indices of a sequence, combine range( ) and len( ) as follows:

 

>>> a = ['Mary', 'had', 'a', 'little', 'lamb']

>>> for i in range(len(a)):

... print i, a[i]

...

0 Mary

1 had

2 a

3 little

4 lamb

 

break, continue, else loops

The break statement, like in C, breaks out of the smallest enclosing for or while loop.

The continue statement, also borrowed from C, continues with the next iteration of the loop.
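
For example, continue skips the remainder of the loop body and moves on to the next item (a minimal illustration):

>>> for num in range(2, 6):
...     if num % 2 == 0:
...         print num, 'is even'
...         continue
...     print num, 'is odd'
...
2 is even
3 is odd
4 is even
5 is odd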

Loop statements may have an else clause; it is executed when the loop terminates through exhaustion of the list (with for ) or when the condition becomes false (with while ), but not when the loop is terminated by a break statement. This is exemplified by the following loop, which searches for prime numbers:

 

>>> for n in range(2, 10):

... for x in range(2, n):

... if n % x == 0:

... print n, 'equals', x, '*', n/x

... break

... else:

... # loop fell through without finding a factor

... print n, 'is a prime number'

...

2 is a prime number

3 is a prime number

4 equals 2 * 2

5 is a prime number

6 equals 2 * 3

7 is a prime number

8 equals 2 * 4

9 equals 3 * 3

 

pass

The pass statement does nothing. It can be used when a statement is required syntactically but the program requires no action. For example:

 

>>> while 1:

... pass # Busy-wait for keyboard interrupt

...

sys.exit(0)

The sys module provides many functions, including exit, which stops the currently running agent. Calling sys.exit(0) stops the agent, and the agent returns the numeric value passed to exit( ). You must import sys before calling sys.exit(0).

 

>>> import sys

>>> sys.exit(0)

...

 

Defining Functions

We can create a function that writes the Fibonacci series to an arbitrary boundary:

 

>>> def fib(n): # write Fibonacci series up to n

... """Print a Fibonacci series up to n."""

... a, b = 0, 1

... while b < n:

... print b,

... a, b = b, a+b

...

>>> # Now call the function we just defined:

... fib(2000)

1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597

The keyword def introduces a function definition. It must be followed by the function name and the parenthesized list of formal parameters. The statements that form the body of the function start at the next line and must be indented. The first statement of the function body can optionally be a string literal; this string literal is the function's documentation string, or docstring.

There are tools which use docstrings to automatically produce online or printed documentation, or to let the user interactively browse through code; it's good practice to include docstrings in code that you write, so try to make a habit of it.

The execution of a function introduces a new symbol table used for the local variables of the function. More precisely, all variable assignments in a function store the value in the local symbol table; whereas variable references first look in the local symbol table, then in the global symbol table and then in the table of built-in names. Thus, global variables cannot be directly assigned a value within a function (unless named in a global statement), although they may be referenced.

The actual parameters (arguments) to a function call are introduced in the local symbol table of the called function when it is called; thus, arguments are passed using call by value (where the value is always an object reference, not the value of the object). When a function calls another function, a new local symbol table is created for that call.

A function definition introduces the function name in the current symbol table. The value of the function name has a type that is recognized by the interpreter as a user-defined function. This value can be assigned to another name which can then also be used as a function. This serves as a general renaming mechanism:

 

>>> fib

<function object at 10042ed0>

>>> f = fib

>>> f(100)

1 1 2 3 5 8 13 21 34 55 89

You might object that fib is not a function but a procedure. In Python, like in C, procedures are just functions that don't return a value. In fact, technically speaking, procedures do return a value, albeit a rather boring one. This value is called None (it's a built-in name). Writing the value None is normally suppressed by the interpreter if it would be the only value written. You can see it if you really want to:

 

>>> print fib(0)

None

It is simple to write a function that returns a list of the numbers of the Fibonacci series, instead of printing it:

 

>>> def fib2(n): # return Fibonacci series up to n

... """Return a list containing the Fibonacci series up to n."""

... result = []

... a, b = 0, 1

... while b < n:

... result.append(b) # see below

... a, b = b, a+b

... return result

...

>>> f100 = fib2(100) # call it

>>> f100 # write the result

[1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]

This example, as usual, demonstrates some new Python features:

  • The return statement returns with a value from a function. return without an expression argument returns None . Falling off the end of a procedure also returns None .
  • The statement result.append(b) calls a method of the list object result . A method is a function that `belongs' to an object and is named obj.methodname , where obj is some object (this may be an expression), and methodname is the name of a method that is defined by the object's type. Different types define different methods. Methods of different types may have the same name without causing ambiguity (it is possible to define your own object types and methods, using classes, as discussed later in this tutorial). The method append( ) shown in the example, is defined for list objects; it adds a new element at the end of the list. In this example it is equivalent to result = result + [b] , but more efficient.
More on Defining Functions

It is also possible to define functions with a variable number of arguments. There are three forms, which can be combined.

Default Argument Values

The most useful form is to specify a default value for one or more arguments. This creates a function that can be called with fewer arguments than it is defined to allow. For example:

 

def ask_ok(prompt, retries=4, complaint='Yes or no, please!'):

while 1:

ok = raw_input(prompt)

if ok in ('y', 'ye', 'yes'): return 1

if ok in ('n', 'no', 'nop', 'nope'): return 0

retries = retries - 1

if retries < 0: raise IOError, 'refusenik user'

print complaint

This function can be called either like this:

 

ask_ok('Do you really want to quit?')

or like this:

ask_ok('OK to overwrite the file?', 2)

The default values are evaluated at the point of function definition in the defining scope, so that

 

i = 5

def f(arg=i):

print arg

i = 6

f( )

will print 5 .

  • The default value is evaluated only once. This makes a difference when the default is a mutable object such as a list or dictionary. For example, the following function accumulates the arguments passed to it on subsequent calls:

 

def f(a, L=[]):

L.append(a)

return L

print f(1)

print f(2)

print f(3)

This will print

 

[1]

[1, 2]

[1, 2, 3]

If you don't want the default to be shared between subsequent calls, you can write the function like this instead:

 

def f(a, L=None):

if L is None:

L = []

L.append(a)

return L

 

Keyword Arguments

Functions can also be called using keyword arguments of the form keyword = value . For instance, the following function:

 

def parrot(voltage, state='a stiff', action='voom', type='Norwegian Blue'):

print "-- This parrot wouldn't", action,

print "if you put", voltage, "Volts through it."

print "-- Lovely plumage, the", type

print "-- It's", state, "!"

could be called in any of the following ways:

 

parrot(1000)

parrot(action = 'VOOOOOM', voltage = 1000000)

parrot('a thousand', state = 'pushing up the daisies')

parrot('a million', 'bereft of life', 'jump')

but the following calls would all be invalid:

 

parrot( ) # required argument missing

parrot(voltage=5.0, 'dead') # non-keyword argument following keyword

parrot(110, voltage=220) # duplicate value for argument

parrot(actor='John Cleese') # unknown keyword

In general, an argument list must have any positional arguments followed by any keyword arguments, where the keywords must be chosen from the formal parameter names. It's not important whether a formal parameter has a default value or not. No argument may receive a value more than once -- formal parameter names corresponding to positional arguments cannot be used as keywords in the same calls. Here's an example that fails due to this restriction:

 

>>> def function(a):

... pass

...

>>> function(0, a=0)

Traceback (most recent call last):

File "<stdin>", line 1, in ?

TypeError: keyword parameter redefined

When a final formal parameter of the form **name is present, it receives a dictionary containing all keyword arguments whose keyword doesn't correspond to a formal parameter. This may be combined with a formal parameter of the form *name (described in the next subsection) which receives a tuple containing the positional arguments beyond the formal parameter list. ( *name must occur before **name .) For example, if we define a function like this:

 

def cheeseshop(kind, *arguments, **keywords):

print "-- Do you have any", kind, '?'

print "-- I'm sorry, we're all out of", kind

for arg in arguments: print arg

print '-'*40

for kw in keywords.keys( ): print kw, ':', keywords[kw]

It could be called like this:

 

cheeseshop('Limburger', "It's very runny, sir.",

"It's really very, VERY runny, sir.",

client='John Cleese',

shopkeeper='Michael Palin',

sketch='Cheese Shop Sketch')

and of course it would print:

 

-- Do you have any Limburger ?

-- I'm sorry, we're all out of Limburger

It's very runny, sir.

It's really very, VERY runny, sir.

----------------------------------------

client : John Cleese

shopkeeper : Michael Palin

sketch : Cheese Shop Sketch

 

Arbitrary Argument Lists

Finally, the least frequently used option is to specify that a function can be called with an arbitrary number of arguments. These arguments will be wrapped up in a tuple. Before the variable number of arguments, zero or more normal arguments may occur.

 

def fprintf(file, format, *args):

file.write(format % args)
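
For instance, a function defined this way can be called with any number of values to fill in the format; the file name below is just an assumption for illustration:

f = open('results.log', 'w')                      # hypothetical output file
fprintf(f, '%s scored %d points\n', 'Mary', 42)   # args becomes the tuple ('Mary', 42)
f.close()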

 

Lambda Forms

By popular demand, a few features commonly found in functional programming languages and Lisp have been added to Python. With the lambda keyword, small anonymous functions can be created. Here's a function that returns the sum of its two arguments: lambda a, b: a+b . Lambda forms can be used wherever function objects are required. They are syntactically restricted to a single expression. Semantically, they are just syntactic sugar for a normal function definition. Like nested function definitions, lambda forms can reference variables from the containing scope:

 

>>> def make_incrementor(n):

... return lambda x: x + n

...

>>> f = make_incrementor(42)

>>> f(0)

42

>>> f(1)

43

 

Documentation Strings

There are emerging conventions about the content and formatting of documentation strings.  

The first line should always be a short, concise summary of the object's purpose. For brevity, it should not explicitly state the object's name or type, since these are available by other means (except if the name happens to be a verb describing a function's operation). This line should begin with a capital letter and end with a period.

If there are more lines in the documentation string, the second line should be blank, visually separating the summary from the rest of the description. The following lines should be one or more paragraphs describing the object's calling conventions, its side effects, etc.

The Python parser does not strip indentation from multi-line string literals in Python, so tools that process documentation have to strip indentation if desired. This is done using the following convention. The first non-blank line after the first line of the string determines the amount of indentation for the entire documentation string. We can't use the first line since it is generally adjacent to the string's opening quotes so its indentation is not apparent in the string literal. Whitespace "equivalent" to this indentation is then stripped from the start of all lines of the string. Lines that are indented less should not occur, but if they occur all their leading whitespace should be stripped. Equivalence of whitespace should be tested after expansion of tabs (to 8 spaces, normally).

Here is an example of a multi-line docstring:

 

>>> def my_function( ):

... """Do nothing, but document it.

...

... No, really, it doesn't do anything.

... """

... pass

...

>>> print my_function.__doc__

Do nothing, but document it.

 

No, really, it doesn't do anything.

 

Data Structures

More on Lists

The list data type has some more methods. Here are all of the methods of list objects:

append(x)

Add an item to the end of the list; equivalent to a[len(a):] = [x] .

extend(L)

Extend the list by appending all the items in the given list; equivalent to a[len(a):] = L .

insert(i,x)

Insert an item at a given position. The first argument is the index of the element before which to insert, so a.insert(0, x) inserts at the front of the list and a.insert(len(a), x) is equivalent to a.append(x) .

remove(x)

Remove the first item from the list whose value is x . It is an error if there is no such item.

pop([i])

Remove the item at the given position in the list and return it. If no index is specified, a.pop( ) returns the last item in the list. The item is also removed from the list.

index(x)

Return the index in the list of the first item whose value is x . It is an error if there is no such item.

count(x)

Return the number of times x appears in the list.

sort( )

Sort the items of the list, in place.

reverse( )

Reverse the elements of the list, in place.

An example that uses most of the list methods:

 

>>> a = [66.6, 333, 333, 1, 1234.5]

>>> print a.count(333), a.count(66.6), a.count('x')

2 1 0

>>> a.insert(2, -1)

>>> a.append(333)

>>> a

[66.6, 333, -1, 333, 1, 1234.5, 333]

>>> a.index(333)

1

>>> a.remove(333)

>>> a

[66.6, -1, 333, 1, 1234.5, 333]

>>> a.reverse( )

>>> a

[333, 1234.5, 1, 333, -1, 66.6]

>>> a.sort( )

>>> a

[-1, 1, 66.6, 333, 333, 1234.5]

 

Using Lists as Stacks

The list methods make it very easy to use a list as a stack, where the last element added is the first element retrieved ("last-in, first-out"). To add an item to the top of the stack, use append( ) . To retrieve an item from the top of the stack, use pop( ) without an explicit index. For example:

 

>>> stack = [3, 4, 5]

>>> stack.append(6)

>>> stack.append(7)

>>> stack

[3, 4, 5, 6, 7]

>>> stack.pop( )

7

>>> stack

[3, 4, 5, 6]

>>> stack.pop( )

6

>>> stack.pop( )

5

>>> stack

[3, 4]

 

Using Lists as Queues

You can also use a list conveniently as a queue, where the first element added is the first element retrieved ("first-in, first-out"). To add an item to the back of the queue, use append( ) . To retrieve an item from the front of the queue, use pop( ) with 0 as the index. For example:

 

>>> queue = ["Eric", "John", "Michael"]

>>> queue.append("Terry") # Terry arrives

>>> queue.append("Graham") # Graham arrives

>>> queue.pop(0)

'Eric'

>>> queue.pop(0)

'John'

>>> queue

['Michael', 'Terry', 'Graham']

 

Functional Programming Tools

There are three built-in functions that are very useful when used with lists: filter( ), map( ), and reduce( ) .

filter(function, sequence) returns a sequence (of the same type, if possible) consisting of those items from the sequence for which function(item) is true. For example, to compute some primes:

 

>>> def f(x): return x % 2 != 0 and x % 3 != 0

...

>>> filter(f, range(2, 25))

[5, 7, 11, 13, 17, 19, 23]

map(function, sequence) calls function(item) for each of the sequence's items and returns a list of the return values. For example, to compute some cubes:

 

>>> def cube(x): return x*x*x

...

>>> map(cube, range(1, 11))

[1, 8, 27, 64, 125, 216, 343, 512, 729, 1000]

More than one sequence may be passed; the function must then have as many arguments as there are sequences and is called with the corresponding item from each sequence (or None if some sequence is shorter than another). If None is passed for the function, a function returning its argument(s) is substituted.

Combining these two special cases, we see that map(None, list1, list2) is a convenient way of turning a pair of lists into a list of pairs. For example:

 

>>> seq = range(8)

>>> def square(x): return x*x

...

>>> map(None, seq, map(square, seq))

[(0, 0), (1, 1), (2, 4), (3, 9), (4, 16), (5, 25), (6, 36), (7, 49)]

reduce(func, sequence) returns a single value constructed by calling the binary function func on the first two items of the sequence, then on the result and the next item, and so on. For example, to compute the sum of the numbers 1 through 10:

>>> def add(x,y): return x+y

...

>>> reduce(add, range(1, 11))

55

If there's only one item in the sequence, its value is returned; if the sequence is empty, an exception is raised.

A third argument can be passed to indicate the starting value. In this case the starting value is returned for an empty sequence and the function is first applied to the starting value and the first sequence item, then to the result and the next item and so on. For example,

 

>>> def sum(seq):

... def add(x,y): return x+y

... return reduce(add, seq, 0)

...

>>> sum(range(1, 11))

55

>>> sum([])

0

 

List Comprehensions

List comprehensions provide a concise way to create lists without resorting to use of map( ), filter( ) and/or lambda. The resulting list definition tends often to be clearer than lists built using those constructs. Each list comprehension consists of an expression followed by a for clause, then zero or more for or if clauses. The result will be a list resulting from evaluating the expression in the context of the for and if clauses which follow it. If the expression would evaluate to a tuple, it must be parenthesized.

 

>>> freshfruit = [' banana', ' loganberry ', 'passion fruit ']

>>> [weapon.strip( ) for weapon in freshfruit]

['banana', 'loganberry', 'passion fruit']

>>> vec = [2, 4, 6]

>>> [3*x for x in vec]

[6, 12, 18]

>>> [3*x for x in vec if x > 3]

[12, 18]

>>> [3*x for x in vec if x < 2]

[]

>>> [{x: x**2} for x in vec]

[{2: 4}, {4: 16}, {6: 36}]

>>> [[x,x**2] for x in vec]

[[2, 4], [4, 16], [6, 36]]

>>> [x, x**2 for x in vec] # error - parens required for tuples

File "<stdin>", line 1, in ?

[x, x**2 for x in vec]

^

SyntaxError: invalid syntax

>>> [(x, x**2) for x in vec]

[(2, 4), (4, 16), (6, 36)]

>>> vec1 = [2, 4, 6]

>>> vec2 = [4, 3, -9]

>>> [x*y for x in vec1 for y in vec2]

[8, 6, -18, 16, 12, -36, 24, 18, -54]

>>> [x+y for x in vec1 for y in vec2]

[6, 5, -7, 8, 7, -5, 10, 9, -3]

>>> [vec1[i]*vec2[i] for i in range(len(vec1))]

[8, 12, -54]

 

del

There is a way to remove an item from a list given its index instead of its value: the del statement. This can also be used to remove slices from a list (which we did earlier by assignment of an empty list to the slice). For example:

 

>>> a

[-1, 1, 66.6, 333, 333, 1234.5]

>>> del a[0]

>>> a

[1, 66.6, 333, 333, 1234.5]

>>> del a[2:4]

>>> a

[1, 66.6, 1234.5]

del can also be used to delete entire variables:

 

>>> del a

Referencing the name a hereafter is an error (at least until another value is assigned to it).

Tuples and Sequences

We saw that lists and strings have many common properties, such as indexing and slicing operations. They are two examples of sequence data types. Since Python is an evolving language, other sequence data types may be added. There is also another standard sequence data type: the tuple.

A tuple consists of a number of values separated by commas, for instance:

 

>>> t = 12345, 54321, 'hello!'

>>> t[0]

12345

>>> t

(12345, 54321, 'hello!')

>>> # Tuples may be nested:

... u = t, (1, 2, 3, 4, 5)

>>> u

((12345, 54321, 'hello!'), (1, 2, 3, 4, 5))

As you see, output tuples are alway enclosed in parentheses, so that nested tuples are interpreted correctly; they may be input with or without surrounding parentheses, although often parentheses are necessary anyway (if the tuple is part of a larger expression).

Tuples have many uses. For example: (x, y) coordinate pairs, employee records from a database, etc. Tuples, like strings, are immutable: it is not possible to assign to the individual items of a tuple (you can simulate much of the same effect with slicing and concatenation, though). It is also possible to create tuples which contain mutable objects, such as lists.
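
For example, assigning to an item of a tuple fails, just as it did for strings earlier (the exact error message may vary slightly between interpreters):

>>> t = 12345, 54321, 'hello!'
>>> t[0] = 88888    # not allowed; tuples are immutable
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: object doesn't support item assignment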

A special problem is the construction of tuples containing 0 or 1 items: the syntax has some extra quirks to accommodate these. Empty tuples are constructed by an empty pair of parentheses; a tuple with one item is constructed by following a value with a comma (it is not sufficient to enclose a single value in parentheses). Ugly, but effective. For example:

 

>>> empty = ( )

>>> singleton = 'hello', # <-- note trailing comma

>>> len(empty)

0

>>> len(singleton)

1

>>> singleton

('hello',)

The statement t = 12345, 54321, 'hello!' is an example of tuple packing: the values 12345, 54321 and 'hello!' are packed together in a tuple. The reverse operation is also possible:

 

>>> x, y, z = t

This is called, appropriately enough, sequence unpacking. Sequence unpacking requires that the list of variables on the left have the same number of elements as the length of the sequence.

  • The multiple assignment is really just a combination of tuple packing and sequence unpacking!

There is a small bit of asymmetry here: packing multiple values always creates a tuple and unpacking works for any sequence.
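
For example, unpacking works just as well when the sequence on the right-hand side is a list:

>>> words = ['Mary', 'had', 'a']
>>> first, second, third = words
>>> second
'had'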

Dictionaries

Another useful data type built into Python is the dictionary. Dictionaries are sometimes found in other languages as "associative memories" or "associative arrays". Unlike sequences, which are indexed by a range of numbers, dictionaries are indexed by keys , which can be any immutable type; strings and numbers can always be keys. Tuples can be used as keys if they contain only strings, numbers or tuples; if a tuple contains any mutable object either directly or indirectly, it cannot be used as a key. You can't use lists as keys, since lists can be modified in place using their append( ) and extend( ) methods, as well as slice and indexed assignments.

It is best to think of a dictionary as an unordered set of key: value pairs, with the requirement that the keys are unique (within one dictionary). A pair of braces creates an empty dictionary: { } . Placing a comma-separated list of key:value pairs within the braces adds initial key:value pairs to the dictionary; this is also the way dictionaries are written on output.

The main operations on a dictionary are storing a value with some key and extracting the value given the key. It is also possible to delete a key:value pair with del . If you store using a key that is already in use, the old value associated with that key is forgotten. It is an error to extract a value using a non-existent key.

The keys( ) method of a dictionary object returns a list of all the keys used in the dictionary, in random order (if you want it sorted, just apply the sort( ) method to the list of keys). To check whether a single key is in the dictionary, use the has_key( ) method of the dictionary.

Here is a small example using a dictionary:

 

>>> tel = {'jack': 4098, 'sape': 4139}

>>> tel['guido'] = 4127

>>> tel

{'sape': 4139, 'guido': 4127, 'jack': 4098}

>>> tel['jack']

4098

>>> del tel['sape']

>>> tel['irv'] = 4127

>>> tel

{'guido': 4127, 'irv': 4127, 'jack': 4098}

>>> tel.keys( )

['guido', 'irv', 'jack']

>>> tel.has_key('guido')

1

 

More on Conditions

The conditions used in while and if statements above can contain other operators besides comparisons.

The comparison operators in and not in check whether a value occurs (does not occur) in a sequence. The operators is and is not compare whether two objects are really the same object; this only matters for mutable objects like lists. All comparison operators have the same priority, which is lower than that of all numerical operators.

Comparisons can be chained. For example, a < b == c tests whether a is less than b and moreover b equals c .
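
For example, membership tests and a chained comparison at the interactive prompt (the interpreter writes 1 for true and 0 for false):

>>> a, b, c = 1, 2, 2
>>> a < b == c
1
>>> 'ape' in ['ape', 'bee', 'cat']
1
>>> 3 not in [1, 2, 4]
1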

Comparisons may be combined by the Boolean operators and and or and the outcome of a comparison (or of any other Boolean expression) may be negated with not . These all have lower priorities than comparison operators again; between them, not has the highest priority, and or the lowest, so that A and not B or C is equivalent to ( A and (not B) ) or C . Of course, parentheses can be used to express the desired composition.

The Boolean operators and and or are so-called shortcut operators: their arguments are evaluated from left to right and evaluation stops as soon as the outcome is determined. For example, if A and C are true but B is false, A and B and C does not evaluate the expression C . In general, the return value of a shortcut operator, when used as a general value and not as a Boolean, is the last evaluated argument.

It is possible to assign the result of a comparison or other Boolean expression to a variable. For example,

 

>>> string1, string2, string3 = '', 'Trondheim', 'Hammer Dance'

>>> non_null = string1 or string2 or string3

>>> non_null

'Trondheim'

  • In Python, unlike C, assignment cannot occur inside expressions. C programmers may grumble about this, but it avoids a common class of problems encountered in C programs: typing = in an expression when == was intended.
Comparing Sequences and Other Types

Sequence objects may be compared to other objects with the same sequence type. The comparison uses lexicographical ordering: first the first two items are compared and if they differ this determines the outcome of the comparison; if they are equal, the next two items are compared and so on, until either sequence is exhausted. If two items to be compared are themselves sequences of the same type, the lexicographical comparison is carried out recursively. If all items of two sequences compare equal, the sequences are considered equal. If one sequence is an initial sub-sequence of the other, the shorter sequence is the smaller (lesser) one. Lexicographical ordering for strings uses the ASCII ordering for individual characters. Some examples of comparisons between sequences with the same types:

 

(1, 2, 3) < (1, 2, 4)

[1, 2, 3] < [1, 2, 4]

'ABC' < 'C' < 'Pascal' < 'Python'

(1, 2, 3, 4) < (1, 2, 4)

(1, 2) < (1, 2, -1)

(1, 2, 3) == (1.0, 2.0, 3.0)

(1, 2, ('aa', 'ab')) < (1, 2, ('abc', 'a'), 4)

  • Comparing objects of different types is legal. The outcome is deterministic but arbitrary: the types are ordered by their name. Thus, a list is always smaller than a string, a string is always smaller than a tuple, etc. Mixed numeric types are compared according to their numeric value, so 0 equals 0.0, etc.

Modules

If you quit from the Python interpreter and enter it again, the definitions you have made (functions and variables) are lost. Therefore, if you want to write a somewhat longer program, you are better off using a text editor to prepare the input for the interpreter and running it with that file as input instead. This is known as creating a script. As your program gets longer, you may want to split it into several files for easier maintenance. You may also want to use a handy function that you've written in several programs without copying its definition into each program.

To support this, Python has a way to put definitions in a file and use them in a script or in an interactive instance of the interpreter. Such a file is called a module; definitions from a module can be imported into other modules or into the main module (the collection of variables that you have access to in a script executed at the top level and in calculator mode).

A module is a file containing Python definitions and statements. The file name is the module name with the suffix .py appended. Within a module, the module's name (as a string) is available as the value of the global variable __name__ . For instance, use your favorite text editor to create a file called fibo.py in the current directory with the following contents:

 

# Fibonacci numbers module

def fib(n): # write Fibonacci series up to n

a, b = 0, 1

while b < n:

print b,

a, b = b, a+b

def fib2(n): # return Fibonacci series up to n

result = []

a, b = 0, 1

while b < n:

result.append(b)

a, b = b, a+b

return result

Now enter the Python interpreter and import this module with the following command:

 

>>> import fibo

This does not enter the names of the functions defined in fibo directly in the current symbol table; it only enters the module name fibo there. Using the module name you can access the functions:

 

>>> fibo.fib(1000)

1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987

>>> fibo.fib2(100)

[1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]

>>> fibo.__name__

'fibo'

If you intend to use a function often you can assign it to a local name:

 

>>> fib = fibo.fib

>>> fib(500)

1 1 2 3 5 8 13 21 34 55 89 144 233 377

 

More on Modules

A module can contain executable statements as well as function definitions. These statements are intended to initialize the module. They are executed only the first time the module is imported somewhere.

Each module has its own private symbol table, which is used as the global symbol table by all functions defined in the module. Thus, the author of a module can use global variables in the module without worrying about accidental clashes with a user's global variables. On the other hand, if you know what you are doing you can touch a module's global variables with the same notation used to refer to its functions, modname.itemname .
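
As a small sketch (the module name greetings.py and its contents are hypothetical, chosen only for illustration), a module-level statement runs once, at first import, and a module-level variable can then be read or rebound as modname.itemname :

# greetings.py -- hypothetical example module
print 'initializing greetings'   # executed only the first time the module is imported
language = 'English'             # a module-level (global) variable

def hello(name):
    print 'Hello,', name

>>> import greetings
initializing greetings
>>> greetings.hello('World')
Hello, World
>>> greetings.language                 # read the module's global
'English'
>>> greetings.language = 'Norwegian'   # rebind it using modname.itemname
>>> import greetings                   # a second import does not re-run the print statement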

Modules can import other modules. It is customary but not required to place all import statements at the beginning of a module (or script, for that matter). The imported module names are placed in the importing module's global symbol table.

There is a variant of the import statement that imports names from a module directly into the importing module's symbol table. For example:

 

>>> from fibo import fib, fib2

>>> fib(500)

1 1 2 3 5 8 13 21 34 55 89 144 233 377

This does not introduce the module name from which the imports are taken in the local symbol table (so in the example, fibo is not defined).

There is even a variant to import all names that a module defines:

 

>>> from fibo import *

>>> fib(500)

1 1 2 3 5 8 13 21 34 55 89 144 233 377

This imports all names except those beginning with an underscore ( _ ).

The Module Search Path

When a module named spam is imported, the interpreter searches for a file named spam.py in the current directory and then in the list of directories specified by the environment variable PYTHONPATH. This has the same syntax as the shell variable PATH, that is, a list of directory names. When PYTHONPATH is not set, or when the file is not found there, the search continues in an installation-dependent default path; on Unix, this is usually .:/usr/local/lib/python .

Actually, modules are searched in the list of directories given by the variable sys.path which is initialized from the directory containing the input script (or the current directory), PYTHONPATH and the installation-dependent default. This allows Python programs that know what they're doing to modify or replace the module search path.
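
You can inspect the resulting search path from the interpreter; the entries depend entirely on your installation, so no output is reproduced here:

>>> import sys
>>> print sys.path   # the list of directories searched for modules; contents vary by installation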

  • Because the directory containing the script being run is on the search path, it is important that the script not have the same name as a standard module, or Python will attempt to load the script as a module when that module is imported. This will generally be an error.
Compiled Python Files

As an important speed-up of the start-up time for short programs that use a lot of standard modules, if a file called spam.pyc exists in the directory where spam.py is found, this is assumed to contain an already "byte-compiled" version of the module spam . The modification time of the version of spam.py used to create spam.pyc is recorded in spam.pyc , and the .pyc file is ignored if these don't match.

Normally, you don't need to do anything to create the spam.pyc file. Whenever spam.py is successfully compiled, an attempt is made to write the compiled version to spam.pyc . It is not an error if this attempt fails; if for any reason the file is not written completely, the resulting spam.pyc file will be recognized as invalid and thus ignored later. The contents of the spam.pyc file are platform independent, so a Python module directory can be shared by machines of different architectures.

Some tips for experts:

  • When the Python interpreter is invoked with the -O flag, optimized code is generated and stored in .pyo files. The optimizer currently doesn't help much; it only removes assert statements and SET_LINENO instructions. When -O is used, all bytecode is optimized; .pyc files are ignored and .py files are compiled to optimized bytecode.
  • Passing two -O flags to the Python interpreter (-OO) will cause the bytecode compiler to perform optimizations that could in some rare cases result in malfunctioning programs. Currently only __doc__ strings are removed from the bytecode, resulting in more compact .pyo files. Since some programs may rely on having these available, you should only use this option if you know what you're doing.
  • A program doesn't run any faster when it is read from a .pyc or .pyo file than when it is read from a .py file; the only thing that's faster about .pyc or .pyo files is the speed with which they are loaded.
  • When a script is run by giving its name on the command line, the bytecode for the script is never written to a .pyc or .pyo file. Thus, the startup time of a script may be reduced by moving most of its code to a module and having a small bootstrap script that imports that module. It is also possible to name a .pyc or .pyo file directly on the command line.
  • It is possible to have a file called spam.pyc (or spam.pyo when -O is used) without a file spam.py for the same module. This can be used to distribute a library of Python code in a form that is moderately hard to reverse engineer.
  • The module compileall can create .pyc files (or .pyo files when -O is used) for all modules in a directory.
Standard Modules

Python comes with a library of standard modules, described in a separate document, the Python Library Reference ("Library Reference" hereafter). Some modules are built into the interpreter; these provide access to operations that are not part of the core of the language but are nevertheless built in, either for efficiency or to provide access to operating system primitives such as system calls. The set of such modules is a configuration option which also depends on the underlying platform. For example, the amoeba module is only provided on systems that somehow support Amoeba primitives. One particular module deserves some attention: sys , which is built into every Python interpreter. The variables sys.ps1 and sys.ps2 define the strings used as primary and secondary prompts:

 

>>> import sys

>>> sys.ps1

'>>> '

>>> sys.ps2

'... '

>>> sys.ps1 = 'C> '

C> print 'Yuck!'

Yuck!

C>

These two variables are only defined if the interpreter is in interactive mode.

The variable sys.path is a list of strings that determine the interpreter's search path for modules. It is initialized to a default path taken from the environment variable PYTHONPATH , or from a built-in default if PYTHONPATH is not set. You can modify it using standard list operations:

 

>>> import sys

>>> sys.path.append('/ufs/guido/lib/python')

 

dir( )

The built-in function dir( ) is used to find out which names a module defines. It returns a sorted list of strings:

 

>>> import fibo, sys

>>> dir(fibo)

['_ _name_ _', 'fib', 'fib2']

>>> dir(sys)

['_ _displayhook_ _', '_ _doc_ _', '_ _excepthook_ _', '_ _name_ _', '_ _stderr_ _',

'_ _stdin_ _', '_ _stdout_ _', '_getframe', 'argv', 'builtin_module_names',

'byteorder', 'copyright', 'displayhook', 'exc_info', 'exc_type',

'excepthook', 'exec_prefix', 'executable', 'exit', 'getdefaultencoding',

'getdlopenflags', 'getrecursionlimit', 'getrefcount', 'hexversion',

'maxint', 'maxunicode', 'modules', 'path', 'platform', 'prefix', 'ps1',

'ps2', 'setcheckinterval', 'setdlopenflags', 'setprofile',

'setrecursionlimit', 'settrace', 'stderr', 'stdin', 'stdout', 'version',

'version_info', 'warnoptions']

Without arguments, dir( ) lists the names you have defined currently:

 

>>> a = [1, 2, 3, 4, 5]

>>> import fibo, sys

>>> fib = fibo.fib

>>> dir( )

['_ _name_ _', 'a', 'fib', 'fibo', 'sys']

  • Note that it lists all types of names: variables, modules, functions, etc.

dir( ) does not list the names of built-in functions and variables. If you want a list of those, they are defined in the standard module _ _builtin_ _  :

 

>>> import _ _builtin_ _

>>> dir(_ _builtin_ _)

['ArithmeticError', 'AssertionError', 'AttributeError',

'DeprecationWarning', 'EOFError', 'Ellipsis', 'EnvironmentError',

'Exception', 'FloatingPointError', 'IOError', 'ImportError',

'IndentationError', 'IndexError', 'KeyError', 'KeyboardInterrupt',

'LookupError', 'MemoryError', 'NameError', 'None', 'NotImplemented',

'NotImplementedError', 'OSError', 'OverflowError', 'OverflowWarning',

'ReferenceError', 'RuntimeError', 'RuntimeWarning', 'StandardError',

'StopIteration', 'SyntaxError', 'SyntaxWarning', 'SystemError',

'SystemExit', 'TabError', 'TypeError', 'UnboundLocalError',

'UnicodeError', 'UserWarning', 'ValueError', 'Warning',

'ZeroDivisionError', '_', '_ _debug_ _', '_ _doc_ _', '_ _import_ _',

'_ _name_ _', 'abs', 'apply', 'buffer', 'callable', 'chr', 'classmethod',

'cmp', 'coerce', 'compile', 'complex', 'copyright', 'credits', 'delattr',

'dict', 'dir', 'divmod', 'eval', 'execfile', 'exit', 'file', 'filter',

'float', 'getattr', 'globals', 'hasattr', 'hash', 'help', 'hex', 'id',

'input', 'int', 'intern', 'isinstance', 'issubclass', 'iter', 'len',

'license', 'list', 'locals', 'long', 'map', 'max', 'min', 'object',

'oct', 'open', 'ord', 'pow', 'property', 'quit', 'range', 'raw_input',

'reduce', 'reload', 'repr', 'round', 'setattr', 'slice', 'staticmethod',

'str', 'super', 'tuple', 'type', 'unichr', 'unicode', 'vars', 'xrange',

'zip']

 

Packages

Packages are a way of structuring Python's module namespace by using "dotted module names". For example, the module name A.B designates a submodule named B in a package named A. Just like the use of modules saves the authors of different modules from having to worry about each other's global variable names, the use of dotted module names saves the authors of multi-module packages like NumPy or the Python Imaging Library from having to worry about each other's module names.

Suppose you want to design a collection of modules (a "package") for the uniform handling of sound files and data. There are many different sound file formats (usually recognized by their extension, for example: .wav, .aiff, .au ), so you may need to create and maintain a growing collection of modules for the conversion between the various file formats. There are also many different operations you might want to perform on sound data (such as mixing, adding echo, applying an equalizer function, creating an artificial stereo effect), so in addition you will be writing a never-ending stream of modules to perform these operations. Here's a possible structure for your package (expressed in terms of a hierarchical filesystem):

 

Sound/ Top-level package

_ _init_ _.py Initialize the sound package

Formats/ Subpackage for file format conversions

_ _init_ _.py

wavread.py

wavwrite.py

aiffread.py

aiffwrite.py

auread.py

auwrite.py

...

Effects/ Subpackage for sound effects

_ _init_ _.py

echo.py

surround.py

reverse.py

...

Filters/ Subpackage for filters

_ _init_ _.py

equalizer.py

vocoder.py

karaoke.py

...

The _ _init_ _.py files are required to make Python treat the directories as containing packages; this is done to prevent directories with a common name, such as " string ", from unintentionally hiding valid modules that occur later on the module search path. In the simplest case, _ _init_ _.py can just be an empty file, but it can also execute initialization code for the package or set the _ _all_ _ variable, described later.

Users of the package can import individual modules from the package, for example:

 

import Sound.Effects.echo

This loads the submodule Sound.Effects.echo . It must be referenced with its full name.

 

Sound.Effects.echo.echofilter(input, output, delay=0.7, atten=4)

An alternative way of importing the submodule is:

 

from Sound.Effects import echo

This also loads the submodule echo , and makes it available without its package prefix, so it can be used as follows:

 

echo.echofilter(input, output, delay=0.7, atten=4)

Yet another variation is to import the desired function or variable directly:

from Sound.Effects.echo import echofilter

Again, this loads the submodule echo , but this makes its function echofilter( ) directly available:

 

echofilter(input, output, delay=0.7, atten=4)

Note that when using from package import item, the item can be either a submodule (or subpackage) of the package, or some other name defined in the package, like a function, class or variable. The import statement first tests whether the item is defined in the package; if not, it assumes it is a module and attempts to load it. If it fails to find it, an ImportError exception is raised.

Contrarily, when using syntax like import item.subitem.subsubitem, each item except for the last must be a package; the last item can be a module or a package but can't be a class or function or variable defined in the previous item.

Importing * From a Package

Now what happens when the user writes from Sound.Effects import *? Ideally, one would hope that this somehow goes out to the filesystem, finds which submodules are present in the package and imports them all. Unfortunately, this operation does not work very well on Mac and Windows platforms, where the filesystem does not always have accurate information about the case of a filename. On these platforms, there is no guaranteed way to know whether a file ECHO.PY should be imported as a module echo , Echo or ECHO . For example, Windows 95 has the annoying practice of showing all file names with a capitalized first letter. The DOS 8+3 filename restriction adds another interesting problem for long module names.

The only solution is for the package author to provide an explicit index of the package. The import statement uses the following convention: if a package's _ _init_ _.py code defines a list named _ _all_ _ , it is taken to be the list of module names that should be imported when from package import * is encountered. It is up to the package author to keep this list up-to-date when a new version of the package is released. Package authors may also decide not to support it, if they don't see a use for importing * from their package. For example, the file Sounds/Effects/_ _init_ _.py could contain the following code:

 

_ _all_ _ = ["echo", "surround", "reverse"]

This would mean that from Sound.Effects import * would import the three named submodules of the Sound package.

If _ _all_ _ is not defined, the statement from Sound.Effects import * does not import all submodules from the package Sound.Effects into the current namespace; it only ensures that the package Sound.Effects has been imported (possibly running its initialization code, _ _init_ _.py) and then imports whatever names are defined in the package. This includes any names defined (and submodules explicitly loaded) by _ _init_ _.py . It also includes any submodules of the package that were explicitly loaded by previous import statements. Consider this code:

 

import Sound.Effects.echo

import Sound.Effects.surround

from Sound.Effects import *

In this example, the echo and surround modules are imported in the current namespace because they are defined in the Sound.Effects package when the from...import statement is executed. This also works when _ _all_ _ is defined.

  • In general, the practice of importing * from a module or package is frowned upon, since it often results in poorly readable code. However, it is okay to use it to save typing in interactive sessions, and certain modules are designed to export only names that follow certain patterns.

Remember, there is nothing wrong with using from Package import specific_submodule. In fact, this is the recommended notation unless the importing module needs to use submodules with the same name from different packages.

Intra-Package References

The submodules often need to refer to each other. For example, the surround module might use the echo module. In fact, such references are so common that the import statement first looks in the containing package before looking in the standard module search path. Thus, the surround module can simply use import echo or from echo import echofilter . If the imported module is not found in the current package (the package of which the current module is a submodule), the import statement looks for a top-level module with the given name.

When packages are structured into subpackages (as with the Sound package in the example), there's no shortcut to refer to submodules of sibling packages - the full name of the subpackage must be used. For example, if the module Sound.Filters.vocoder needs to use the echo module in the Sound.Effects package, it can use from Sound.Effects import echo .
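
For instance, a hedged sketch of what Sound/Effects/surround.py itself might contain (the function body and helper values are purely illustrative):

# Sound/Effects/surround.py -- sketch only
from echo import echofilter            # sibling module; the containing package is searched first
from Sound.Filters import equalizer    # a sibling subpackage needs its full dotted name

def surroundfilter(input, output):
    # the delay and atten values here are illustrative
    echofilter(input, output, delay=0.7, atten=4)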

Input and Output

There are several ways to present the output of a program; data can be printed in a human-readable form, or written to a file for future use.

Fancier Output Formatting

So far we've encountered two ways of writing values: expression statements and the print statement. A third way is using the write( ) method of file objects; the standard output file can be referenced as sys.stdout . See the Library Reference for more information on this.

Often you'll want more control over the formatting of your output than simply printing space-separated values. There are two ways to format your output; the first way is to do all the string handling yourself; using string slicing and concatenation operations you can create any lay-out you can imagine. The standard module string  contains some useful operations for padding strings to a given column width; these will be discussed shortly. The second way is to use the % operator with a string as the left argument. The % operator interprets the left argument much like a sprintf( ) -style format string to be applied to the right argument and returns the string resulting from this formatting operation.

One question remains, of course: how do you convert values to strings? Luckily, Python has ways to convert any value to a string: pass it to the repr( ) or str( ) functions, or just write the value between reverse quotes ( ` ` ), which is equivalent to repr( ).

The str( ) function is meant to return representations of values which are fairly human-readable, while repr( ) is meant to generate representations which can be read by the interpreter (or will force a SyntaxError if there is no equivalent syntax). For objects which don't have a particular representation for human consumption, str( ) will return the same value as repr( ). Many values, such as numbers or structures like lists and dictionaries, have the same representation using either function. Strings and floating point numbers, in particular, have two distinct representations.

Some examples:

 

>>> s = 'Hello, world.'

>>> str(s)

'Hello, world.'

>>> `s`

"'Hello, world.'"

>>> str(0.1)

'0.1'

>>> `0.1`

'0.10000000000000001'

>>> x = 10 * 3.25

>>> y = 200 * 200

>>> s = 'The value of x is ' + `x` + ', and y is ' + `y` + '...'

>>> print s

The value of x is 32.5, and y is 40000...

>>> # Reverse quotes work on other types besides numbers:

... p = [x, y]

>>> ps = repr(p)

>>> ps

'[32.5, 40000]'

>>> # Converting a string adds string quotes and backslashes:

... hello = 'hello, world\n'

>>> hellos = `hello`

>>> print hellos

'hello, world\n'

>>> # The argument of reverse quotes may be a tuple:

... `x, y, ('spam', 'eggs')`

"(32.5, 40000, ('spam', 'eggs'))"

Here are two ways to write a table of squares and cubes:

 

>>> import string

>>> for x in range(1, 11):

... print string.rjust(`x`, 2), string.rjust(`x*x`, 3),

... # Note trailing comma on previous line

... print string.rjust(`x*x*x`, 4)

...

 1   1    1

 2   4    8

 3   9   27

 4  16   64

 5  25  125

 6  36  216

 7  49  343

 8  64  512

 9  81  729

10 100 1000

>>> for x in range(1,11):

... print '%2d %3d %4d' % (x, x*x, x*x*x)

...

 1   1    1

 2   4    8

 3   9   27

 4  16   64

 5  25  125

 6  36  216

 7  49  343

 8  64  512

 9  81  729

10 100 1000

  • Note that the one space between each column was added by the way print works: it always adds spaces between its arguments.

This example demonstrates the function string.rjust( ) , which right-justifies a string in a field of a given width by padding it with spaces on the left. There are similar functions string.ljust( ) and string.center( ). These functions do not write anything, they just return a new string. If the input string is too long, they don't truncate it, but return it unchanged; this will mess up your column lay-out but that's usually better than the alternative, which would be lying about a value. If you really want truncation you can always add a slice operation, as in string.ljust(x, n)[0:n] .
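
For example (a quick interactive sketch):

>>> import string
>>> string.ljust('Jython', 10) + '|'
'Jython    |'
>>> string.center('Jython', 10)
'  Jython  '
>>> string.ljust('a very long name', 6)[0:6]   # truncate with a slice
'a very'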

There is another function, string.zfill( ) , which pads a numeric string on the left with zeros. It understands about plus and minus signs:

 

>>> import string

>>> string.zfill('12', 5)

'00012'

>>> string.zfill('-3.14', 7)

'-003.14'

>>> string.zfill('3.14159265359', 5)

'3.14159265359'

Using the % operator looks like this:

 

>>> import math

>>> print 'The value of PI is approximately %5.3f.' % math.pi

The value of PI is approximately 3.142.

If there is more than one format in the string, you need to pass a tuple as a right operand, as in this example:

 

>>> table = {'Sjoerd': 4127, 'Jack': 4098, 'Dcab': 7678}

>>> for name, phone in table.items( ):

... print '%-10s ==> %10d' % (name, phone)

...

Jack ==> 4098

Dcab ==> 7678

Sjoerd ==> 4127

Most formats work exactly as in C and require that you pass the proper type; however, if you don't, you get an exception, not a core dump. The %s format is more relaxed: if the corresponding argument is not a string object, it is converted to string using the str( ) built-in function. Using * to pass the width or precision in as a separate (integer) argument is supported. The C formats %n and %p are not supported.
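
For example, passing the width and precision as separate integer arguments with * (a small interactive sketch):

>>> import math
>>> '%*.*f' % (10, 3, math.pi)
'     3.142'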

If you have a really long format string that you don't want to split up, it would be nice if you could reference the variables to be formatted by name instead of by position. This can be done by using the form %(name)format , as shown here:

 

>>> table = {'Sjoerd': 4127, 'Jack': 4098, 'Dcab': 8637678}

>>> print 'Jack: %(Jack)d; Sjoerd: %(Sjoerd)d; Dcab: %(Dcab)d' % table

Jack: 4098; Sjoerd: 4127; Dcab: 8637678

This is particularly useful in combination with the new built-in vars( ) function, which returns a dictionary containing all local variables.
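
A small sketch of that combination (the variable names are only illustrative):

>>> name = 'Jack'
>>> extension = 4098
>>> print 'Dial %(extension)d to reach %(name)s.' % vars( )
Dial 4098 to reach Jack.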

Reading and Writing Files

open( )  returns a file object , and is most commonly used with two arguments:

 

" open( filename , mode ) ".

>>> f=open('/tmp/workfile', 'w')

>>> print f

<open file '/tmp/workfile', mode 'w' at 80a0960>

The first argument is a string containing the filename. The second argument is another string containing a few characters describing the way in which the file will be used. mode can be r when the file will only be read, w for only writing (an existing file with the same name will be erased), and a opens the file for appending; any data written to the file is automatically added to the end. r+ opens the file for both reading and writing. The mode argument is optional; r will be assumed if it's omitted.

On Windows and the Macintosh, b appended to the mode opens the file in binary mode, so there are also modes like rb , wb , and r+b. Windows makes a distinction between text and binary files; the end-of-line characters in text files are automatically altered slightly when data is read or written. This behind-the-scenes modification to file data is fine for ASCII text files, but it'll corrupt binary data like that in JPEGs or .EXE files. Be very careful when using binary mode when reading and writing such files.

  • The precise semantics of text mode on the Macintosh depends on the underlying C library being used.
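
For example, an image file (the filename here is hypothetical) should be read in binary mode so its bytes come through unchanged:

f = open('/tmp/logo.jpg', 'rb')   # 'rb' = read, binary
data = f.read( )
f.close( )
print len(data), 'bytes read'
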
Methods of File Objects

The rest of the examples in this section will assume that a file object called f has already been created.

To read a file's contents, call f.read(size), which reads some quantity of data and returns it as a string. Size is an optional numeric argument. When size is omitted or negative, the entire contents of the file will be read and returned; it's your problem if the file is twice as large as your machine's memory. Otherwise, at most size bytes are read and returned. If the end of the file has been reached, f.read( ) will return an empty string ( " " ).

 

>>> f.read( )

'This is the entire file.\n'

>>> f.read( )

''

f.readline( ) reads a single line from the file; a newline character ( \n) is left at the end of the string and is only omitted on the last line of the file if the file doesn't end in a newline. This makes the return value unambiguous; if f.readline( ) returns an empty string, the end of the file has been reached, while a blank line is represented by ' \n ' , a string containing only a single newline.

 

>>> f.readline( )

'This is the first line of the file.\n'

>>> f.readline( )

'Second line of the file\n'

>>> f.readline( )

''

f.readlines( ) returns a list containing all the lines of data in the file. If given an optional parameter sizehint, it reads approximately that many bytes from the file, plus enough more to complete a line, and returns the lines from that. This is often used to allow efficient reading of a large file by lines, but without having to load the entire file in memory. Only complete lines will be returned.

 

>>> f.readlines( )

['This is the first line of the file.\n', 'Second line of the file\n']
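
A hedged sketch of reading a large file in chunks of complete lines this way (the filename and the per-line step are placeholders):

f = open('/tmp/biglog.txt', 'r')
while 1:
    lines = f.readlines(100000)    # roughly 100000 bytes' worth of complete lines
    if not lines:
        break
    for line in lines:
        print line,                # placeholder for real per-line processing
f.close( )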

f.write(string) writes the contents of string to the file, returning None .

 

>>> f.write('This is a test\n')

f.tell( ) returns an integer giving the file object's current position in the file, measured in bytes from the beginning of the file. To change the file object's position, use f.seek(offset, from_what) . The position is computed by adding offset to a reference point; the reference point is selected by the from_what argument. A from_what value of 0 measures from the beginning of the file, 1 uses the current file position and 2 uses the end of the file as the reference point. from_what can be omitted and defaults to 0, using the beginning of the file as the reference point.

 

>>> f=open('/tmp/workfile', 'r+')

>>> f.write('0123456789abcdef')

>>> f.seek(5) # Go to the 6th byte in the file

>>> f.read(1)

'5'

>>> f.seek(-3, 2) # Go to the 3rd byte before the end

>>> f.read(1)

'd'

When you're done with a file, call f.close( ) to close it and free up any system resources taken up by the open file. After calling f.close( ) , attempts to use the file object will automatically fail.

 

>>> f.close( )

>>> f.read( )

Traceback (most recent call last):

File "<stdin>", line 1, in ?

ValueError: I/O operation on closed file

File objects have some additional methods, such as isatty( ) and truncate( ) which are less frequently used; consult the Library Reference for a complete guide to file objects.

The pickle Module

Strings can easily be written to and read from a file. Numbers take a bit more effort, since the read( ) method only returns strings, which will have to be passed to a function like string.atoi( ) , which takes a string like '123' and returns its numeric value 123. However, when you want to save more complex data types like lists, dictionaries, or class instances, things get a lot more complicated.

Rather than have users be constantly writing and debugging code to save complicated data types, Python provides a standard module called pickle . This is an amazing module that can take almost any Python object (even some forms of Python code) and convert it to a string representation; this process is called pickling. Reconstructing the object from the string representation is called unpickling. Between pickling and unpickling, the string representing the object may have been stored in a file or data, or sent over a network connection to some distant machine.

If you have an object x , and a file object f that's been opened for writing, the simplest way to pickle the object takes only one line of code:

 

pickle.dump(x, f)

To unpickle the object again, if f is a file object which has been opened for reading:

 

x = pickle.load(f)

(There are other variants of this, used when pickling many objects or when you don't want to write the pickled data to a file; consult the complete documentation for pickle in the Library Reference).

pickle is the standard way to make Python objects which can be stored and reused by other programs or by a future invocation of the same program; the technical term for this is a persistent object. Because pickle is so widely used, many authors who write Python extensions take care to ensure that new data types such as matrices can be properly pickled and unpickled.
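
Putting the two calls together, a minimal round trip might look like this (the filename and data are only illustrative):

import pickle

record = {'agent': 'TestMaker', 'runs': [1, 2, 3]}

f = open('/tmp/record.pck', 'w')
pickle.dump(record, f)             # pickle the dictionary into the file
f.close( )

f = open('/tmp/record.pck', 'r')
restored = pickle.load(f)          # unpickle it again
f.close( )

print restored                     # the restored copy equals the original dictionary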

Errors and Exceptions

Until now error messages haven't been more than mentioned, but if you have tried out the examples you have probably seen some. There are (at least) two distinguishable kinds of errors: syntax errors and exceptions.

Syntax Errors

Syntax errors, also known as parsing errors, are perhaps the most common kind of complaint you get while you are still learning Python:

 

>>> while 1 print 'Hello world'

File "<stdin>", line 1, in ?

while 1 print 'Hello world'

^

SyntaxError: invalid syntax

The parser repeats the offending line and displays a little arrow pointing at the earliest point in the line where the error was detected. The error is caused by (or at least detected at) the token preceding the arrow: in the example, the error is detected at the keyword print , since a colon ( : ) is missing before it. File name and line number are printed so you know where to look in case the input came from a script.

Exceptions

Even if a statement or expression is syntactically correct, it may cause an error when an attempt is made to execute it. Errors detected during execution are called exceptions and are not unconditionally fatal: you will soon learn how to handle them in Python programs. Most exceptions are not handled by programs, however, and result in error messages as shown here:

 

>>> 10 * (1/0)

Traceback (most recent call last):

File "<stdin>", line 1, in ?

ZeroDivisionError: integer division or modulo

>>> 4 + spam*3

Traceback (most recent call last):

File "<stdin>", line 1, in ?

NameError: spam

>>> '2' + 2

Traceback (most recent call last):

File "<stdin>", line 1, in ?

TypeError: illegal argument type for built-in operation

The last line of the error message indicates what happened. Exceptions come in different types, and the type is printed as part of the message: the types in the example are ZeroDivisionError, NameError and TypeError . The string printed as the exception type is the name of the built-in name for the exception that occurred. This is true for all built-in exceptions, but need not be true for user-defined exceptions (although it is a useful convention). Standard exception names are built-in identifiers (not reserved keywords).

The rest of the line is a detail whose interpretation depends on the exception type.

The preceding part of the error message shows the context where the exception happened, in the form of a stack backtrace. In general it contains a stack backtrace listing source lines; however, it will not display lines read from standard input.

The Python Library Reference lists the built-in exceptions and their meanings.

Handling Exceptions

It is possible to write programs that handle selected exceptions. Look at the following example, which asks the user for input until a valid integer has been entered, but allows the user to interrupt the program (using Control-C or whatever the operating system supports); note that a user-generated interruption is signalled by raising the KeyboardInterrupt exception.

 

>>> while 1:

... try:

... x = int(raw_input("Please enter a number: "))

... break

... except ValueError:

... print "Oops! That was no valid number. Try again..."

...

The try statement works as follows.

  • First, the try clause (the statement(s) between the try and except keywords) is executed.
  • If no exception occurs, the except clause is skipped and execution of the try statement is finished.
  • If an exception occurs during execution of the try clause, the rest of the clause is skipped. Then, if its type matches the exception named after the except keyword, the except clause is executed, and execution continues after the try statement.
  • If an exception occurs which does not match the exception named in the except clause, it is passed on to outer try statements; if no handler is found, it is an unhandled exception and execution stops with a message as shown above.

A try statement may have more than one except clause, to specify handlers for different exceptions. At most one handler will be executed. Handlers only handle exceptions that occur in the corresponding try clause, not in other handlers of the same try statement. An except clause may name multiple exceptions as a parenthesized list, for example:

 

... except (RuntimeError, TypeError, NameError):

... pass

The last except clause may omit the exception name(s), to serve as a wildcard. Use this with extreme caution, since it is easy to mask a real programming error this way. It can also be used to print an error message and then re-raise the exception (allowing a caller to handle the exception as well):

 

import string, sys

try:

f = open('myfile.txt')

s = f.readline( )

i = int(string.strip(s))

except IOError, (errno, strerror):

print "I/O error(%s): %s" % (errno, strerror)

except ValueError:

print "Could not convert data to an integer."

except:

print "Unexpected error:", sys.exc_info( )[0]

raise

The try ... except statement has an optional else clause, which, when present, must follow all except clauses. It is useful for code that must be executed if the try clause does not raise an exception. For example:

 

for arg in sys.argv[1:]:

try:

f = open(arg, 'r')

except IOError:

print 'cannot open', arg

else:

print arg, 'has', len(f.readlines( )), 'lines'

f.close( )

The use of the else clause is better than adding additional code to the try clause because it avoids accidentally catching an exception that wasn't raised by the code being protected by the try ... except statement.

When an exception occurs, it may have an associated value, also known as the exception's argument. The presence and type of the argument depend on the exception type. For exception types which have an argument, the except clause may specify a variable after the exception name (or list) to receive the argument's value, as follows:

 

>>> try:

... spam( )

... except NameError, x:

... print 'name', x, 'undefined'

...

name spam undefined

If an exception has an argument, it is printed as the last part ( detail ) of the message for unhandled exceptions.

Exception handlers don't just handle exceptions if they occur immediately in the try clause, but also if they occur inside functions that are called (even indirectly) in the try clause. For example:

 

>>> def this_fails( ):

... x = 1/0

...

>>> try:

... this_fails( )

... except ZeroDivisionError, detail:

... print 'Handling run-time error:', detail

...

Handling run-time error: integer division or modulo

 

Raising Exceptions

The raise statement allows the programmer to force a specified exception to occur. For example:

 

>>> raise NameError, 'HiThere'

Traceback (most recent call last):

File "<stdin>", line 1, in ?

NameError: HiThere

The first argument to raise names the exception to be raised. The optional second argument specifies the exception's argument.

If you need to determine whether an exception was raised but don't intend to handle it, a simpler form of the raise statement allows you to re-raise the exception:

 

>>> try:

... raise NameError, 'HiThere'

... except NameError:

... print 'An exception flew by!'

... raise

...

An exception flew by!

Traceback (most recent call last):

File "<stdin>", line 2, in ?

NameError: HiThere

 

User-defined Exceptions

Programs may name their own exceptions by creating a new exception class. Exceptions should typically be derived from the Exception class, either directly or indirectly. For example:

 

>>> class MyError(Exception):

... def _ _init_ _(self, value):

... self.value = value

... def _ _str_ _(self):

... return `self.value`

...

>>> try:

... raise MyError(2*2)

... except MyError, e:

... print 'My exception occurred, value:', e.value

...

My exception occurred, value: 4

>>> raise MyError, 'oops!'

Traceback (most recent call last):

File "<stdin>", line 1, in ?

_ _main_ _.MyError: 'oops!'

Exception classes can be defined which do anything any other class can do, but are usually kept simple, often only offering a number of attributes that allow information about the error to be extracted by handlers for the exception. When creating a module which can raise several distinct errors, a common practice is to create a base class for exceptions defined by that module and subclass to create specific exception classes for different error conditions:

 

class Error(Exception):

"""Base class for exceptions in this module."""

pass

 

class InputError(Error):

"""Exception raised for errors in the input.

Attributes:

expression -- input expression in which the error occurred

message -- explanation of the error

"""

def _ _init_ _(self, expression, message):

self.expression = expression

self.message = message

class TransitionError(Error):

"""Raised when an operation attempts a state transition that's not

allowed.

Attributes:

previous -- state at beginning of transition

next -- attempted new state

message -- explanation of why the specific transition is not allowed

"""

def _ _init_ _(self, previous, next, message):

self.previous = previous

self.next = next

self.message = message
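
Because both exception classes derive from the module's Error base class, a caller can catch either of them with a single except clause; a minimal sketch (the arguments are illustrative):

try:
    raise TransitionError('idle', 'running', 'transition not allowed from idle')
except Error, e:
    print 'Module error:', e.message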

Most exceptions are defined with names that end in "Error", similar to the naming of the standard exceptions.

Many standard modules define their own exceptions to report errors that may occur in functions they define.

Defining Clean-Up Actions

The try statement has another optional clause which is intended to define clean-up actions that must be executed under all circumstances. For example:

 

>>> try:

... raise KeyboardInterrupt

... finally:

... print 'Goodbye, world!'

...

Goodbye, world!

Traceback (most recent call last):

File "<stdin>", line 2, in ?

KeyboardInterrupt

A finally clause is executed whether or not an exception has occurred in the try clause. When an exception has occurred, it is re-raised after the finally clause is executed. The finally clause is also executed "on the way out" when the try statement is left via a break or return statement.

The code in the finally clause is useful for releasing external resources (such as files or network connections), regardless of whether or not the use of the resource was successful.

A try statement must either have one or more except clauses or one finally clause, but not both.
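
For example, a common use of finally is making sure a file is closed whether or not the code that uses it raises an exception (the filename is illustrative):

f = open('/tmp/workfile', 'r')
try:
    data = f.read( )
finally:
    f.close( )                     # runs on success, on an exception, and on break/return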

Classes

Python's class mechanism adds classes to the language with a minimum of new syntax and semantics. It is a mixture of the class mechanisms found in C++ and Modula-3. As is true for modules, classes in Python do not put an absolute barrier between definition and user, but rather rely on the politeness of the user not to “break into the definition”. The most important features of classes are retained with full power, however: the class inheritance mechanism allows multiple base classes, a derived class can override any methods of its base class or classes, a method can call the method of a base class with the same name. Objects can contain an arbitrary amount of private data.

In C++ terminology, all class members (including the data members) are public and all member functions are virtual. There are no special constructors or destructors. As in Modula-3, there are no shorthands for referencing the object's members from its methods: the method function is declared with an explicit first argument representing the object, which is provided implicitly by the call. As in Smalltalk, classes themselves are objects, albeit in the wider sense of the word: in Python, all data types are objects. This provides semantics for importing and renaming. But, just like in C++ or Modula-3, built-in types cannot be used as base classes for extension by the user. Also, like in C++ but unlike in Modula-3, most built-in operators with special syntax (arithmetic operators, subscripting etc.) can be redefined for class instances.

Terminology

Lacking universally accepted terminology to talk about classes, I will make occasional use of Smalltalk and C++ terms (I would use Modula-3 terms, since its object-oriented semantics are closer to those of Python than C++, but I expect that few readers have heard of it).

I also have to warn you that there's a terminological pitfall for object-oriented readers: the word “object” in Python does not necessarily mean a class instance. Like C++ and Modula-3 and unlike Smalltalk, not all types in Python are classes: the basic built-in types like integers and lists are not and even somewhat more exotic types like files aren't. However, all Python types share a little bit of common semantics that is best described by using the word object.

Objects have individuality and multiple names (in multiple scopes) can be bound to the same object. This is known as aliasing in other languages. This is usually not appreciated on a first glance at Python and can be safely ignored when dealing with immutable basic types (numbers, strings, tuples). However, aliasing has an (intended) effect on the semantics of Python code involving mutable objects such as lists, dictionaries and most types representing entities outside the program (files, windows, etc.). This is usually used to the benefit of the program, since aliases behave like pointers in some respects. For example, passing an object is cheap since only a pointer is passed by the implementation; and if a function modifies an object passed as an argument, the caller will see the change -- this obviates the need for two different argument passing mechanisms as in Pascal.

Python Scopes and Name Spaces

Before introducing classes, I first have to tell you something about Python's scope rules. Class definitions play some neat tricks with namespaces and you need to know how scopes and namespaces work to fully understand what's going on. Incidentally, knowledge about this subject is useful for any advanced Python programmer.

Definitions.

A namespace is a mapping from names to objects. Most namespaces are currently implemented as Python dictionaries, but that's normally not noticeable in any way (except for performance) and it may change in the future. Examples of namespaces are: the set of built-in names (functions such as abs( ) , and built-in exception names); the global names in a module; and the local names in a function invocation. In a sense the set of attributes of an object also form a namespace. The important thing to know about namespaces is that there is absolutely no relation between names in different namespaces; for instance, two different modules may both define a function “maximize” without confusion -- users of the modules must prefix it with the module name.

By the way, I use the word attribute for any name following a dot -- for example, in the expression z.real, real is an attribute of the object z . Strictly speaking, references to names in modules are attribute references: in the expression modname.funcname , modname is a module object and funcname is an attribute of it. In this case there happens to be a straightforward mapping between the module's attributes and the global names defined in the module: they share the same namespace.

Attributes may be read-only or writable. In the latter case, assignment to attributes is possible. Module attributes are writable: you can write modname.the_answer = 42 . Writable attributes may also be deleted with the del statement. For example, del modname.the_answer will remove the attribute the_answer from the object named by modname .

Name spaces are created at different moments and have different lifetimes. The namespace containing the built-in names is created when the Python interpreter starts up and is never deleted. The global namespace for a module is created when the module definition is read in; normally, module namespaces also last until the interpreter quits. The statements executed by the top-level invocation of the interpreter, either read from a script file or interactively, are considered part of a module called _ _main_ _ , so they have their own global namespace (the built-in names actually also live in a module; this is called _ _builtin_ _ ).

The local namespace for a function is created when the function is called and deleted when the function returns or raises an exception that is not handled within the function. Of course, recursive invocations each have their own local namespace.

A scope is a textual region of a Python program where a namespace is directly accessible. “Directly accessible” here means that an unqualified reference to a name attempts to find the name in the namespace.

Although scopes are determined statically, they are used dynamically. At any time during execution, exactly three nested scopes are in use (exactly three namespaces are directly accessible): the innermost scope, which is searched first, contains the local names, the middle scope, searched next, contains the current module's global names and the outermost scope (searched last) is the namespace containing built-in names.

Usually, the local scope references the local names of the (textually) current function. Outside of functions, the local scope references the same namespace as the global scope: the module's namespace. Class definitions place yet another namespace in the local scope.

It is important to realize that scopes are determined textually: the global scope of a function defined in a module is that module's namespace, no matter from where or by what alias the function is called. On the other hand, the actual search for names is done dynamically, at run time -- however, the language definition is evolving towards static name resolution, at “compile” time, so don't rely on dynamic name resolution (local variables are already determined statically).

A special quirk of Python is that assignments always go into the innermost scope. Assignments do not copy data -- they just bind names to objects. The same is true for deletions: the statement del x removes the binding of x from the namespace referenced by the local scope. In fact, all operations that introduce new names use the local scope: in particular, import statements and function definitions bind the module or function name in the local scope (the global statement can be used to indicate that particular variables live in the global scope).
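
A short sketch of the difference the global statement makes (the names are illustrative):

counter = 0

def bump( ):
    global counter                 # without this, 'counter' would be treated as a new local name
    counter = counter + 1

bump( )
print counter                      # prints 1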

First Look at Classes

Classes introduce a little bit of new syntax, three new object types and some new semantics.

Class Definition Syntax

The simplest form of class definition looks like this:

 

class ClassName:

<statement-1>

.

.

.

<statement-N>

Class definitions, like function definitions ( def statements) must be executed before they have any effect (a class definition can be placed in a branch of an if statement, or inside a function).

In practice, the statements inside a class definition will usually be function definitions, but other statements are allowed and sometimes useful. The function definitions inside a class normally have a peculiar form of argument list, dictated by the calling conventions for methods.

When a class definition is entered, a new namespace is created and used as the local scope -- thus, all assignments to local variables go into this new namespace. In particular, function definitions bind the name of the new function here.

When a class definition is left normally (via the end), a class object is created. This is basically a wrapper around the contents of the namespace created by the class definition; we'll learn more about class objects in the next section. The original local scope (the one in effect just before the class definition was entered) is reinstated, and the class object is bound here to the class name given in the class definition header ( ClassName in the example).

Class Objects

Class objects support two kinds of operations: attribute references and instantiation.

Attribute references use the standard syntax used for all attribute references in Python: obj.name. Valid attribute names are all the names that were in the class's namespace when the class object was created. So, if the class definition looked like this:

 

class MyClass:

"A simple example class"

i = 12345

def f(self):

return 'hello world'

then MyClass.i and MyClass.f are valid attribute references, returning an integer and a method object, respectively. Class attributes can also be assigned to, so you can change the value of MyClass.i by assignment. _ _doc_ _ is also a valid attribute, returning the docstring belonging to the class: "A simple example class".

Class instantiation uses function notation, where the class object is a parameterless function that returns a new instance of the class. For example (assuming the above class):

 

x = MyClass( )

creates a new instance of the class and assigns this object to the local variable x .

The instantiation operation (“calling” a class object) creates an empty object. Many classes like to create objects in a known initial state. Therefore a class may define a special method named _ _init_ _( ) , like this:

def _ _init_ _(self):

self.data = []

When a class defines an _ _init_ _( ) method, class instantiation automatically invokes _ _init_ _( ) for the newly-created class instance. So in this example, a new, initialized instance can be obtained by:

 

x = MyClass( )

Of course, the _ _init_ _( ) method may have arguments for greater flexibility. In that case, arguments given to the class instantiation operator are passed on to _ _init_ _( ) . For example,

 

>>> class Complex:

... def _ _init_ _(self, realpart, imagpart):

... self.r = realpart

... self.i = imagpart

...

>>> x = Complex(3.0, -4.5)

>>> x.r, x.i

(3.0, -4.5)

 

Instance Objects

The only operations understood by instance objects are attribute references. There are two kinds of valid attribute names.

The first I'll call data attributes. These correspond to “instance variables” in Smalltalk, and to “data members” in C++. Data attributes need not be declared; like local variables, they spring into existence when they are first assigned. For example, if x is the instance of MyClass created above, the following piece of code will print the value 16 , without leaving a trace:

 

x.counter = 1

while x.counter < 10:

x.counter = x.counter * 2

print x.counter

del x.counter

The second kind of attribute references understood by instance objects are methods. A method is a function that "belongs to" an object. (In Python, the term method is not unique to class instances: other object types can have methods as well. For example, list objects have methods called append, insert, remove, sort and so on.) However, below, we'll use the term method exclusively to mean methods of class instance objects, unless explicitly stated otherwise.

Valid method names of an instance object depend on its class. By definition, all attributes of a class that are (user-defined) function objects define corresponding methods of its instances. So in our example, x.f is a valid method reference, since MyClass.f is a function, but x.i is not, since MyClass.i is not. But x.f is not the same thing as MyClass.f -- it is a  method object, not a function object.

Method Objects

Usually, a method is called immediately:

 

x.f( )

In our example, this will return the string hello world . However, it is not necessary to call a method right away: x.f is a method object and can be stored away and called at a later time. For example:

 

xf = x.f

while 1:

print xf( )

will continue to print hello world until the end of time.

What exactly happens when a method is called? You may have noticed that x.f( ) was called without an argument above, even though the function definition for f specified an argument. What happened to the argument? Surely Python raises an exception when a function that requires an argument is called without any -- even if the argument isn't actually used.

Actually, you may have guessed the answer: the special thing about methods is that the object is passed as the first argument of the function. In our example, the call x.f( ) is exactly equivalent to MyClass.f(x) . In general, calling a method with a list of n arguments is equivalent to calling the corresponding function with an argument list that is created by inserting the method's object before the first argument.

If you still don't understand how methods work, a look at the implementation can perhaps clarify matters. When an instance attribute is referenced that isn't a data attribute, its class is searched. If the name denotes a valid class attribute that is a function object, a method object is created by packing (pointers to) the instance object and the function object just found together in an abstract object: this is the method object. When the method object is called with an argument list, it is unpacked again, a new argument list is constructed from the instance object and the original argument list and the function object is called with this new argument list.
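
Continuing the MyClass example, the equivalence is easy to see:

x = MyClass( )
print x.f( )                       # prints 'hello world'
print MyClass.f(x)                 # exactly the same call, with the instance passed explicitly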

Random Remarks

Data attributes override method attributes with the same name; to avoid accidental name conflicts, which may cause hard-to-find bugs in large programs, it is wise to use some kind of convention that minimizes the chance of conflicts. Possible conventions include capitalizing method names, prefixing data attribute names with a small unique string (perhaps just an underscore), or using verbs for methods and nouns for data attributes.

Data attributes may be referenced by methods as well as by ordinary users (“clients”) of an object. In other words, classes are not usable to implement pure abstract data types. In fact, nothing in Python makes it possible to enforce data hiding -- it is all based upon convention. However, the Python implementation, written in C, can completely hide implementation details and control access to an object if necessary; this can be used by extensions to Python written in C.

Clients should use data attributes with care -- clients may mess up invariants maintained by the methods by stamping on their data attributes.

  • Clients may add data attributes of their own to an instance object without affecting the validity of the methods, as long as name conflicts are avoided.

There is no shorthand for referencing data attributes (or other methods) from within methods. I find that this actually increases the readability of methods: there is no chance of confusing local variables and instance variables when glancing through a method.

Conventionally, the first argument of methods is often called self . This is nothing more than a convention: the name self has absolutely no special meaning to Python.

  • However, by not following the convention your code may be less readable by other Python programmers, and it is also conceivable that a class browser program be written which relies upon such a convention.

Any function object that is a class attribute defines a method for instances of that class. It is not necessary that the function definition is textually enclosed in the class definition: assigning a function object to a local variable in the class is also ok. For example:

 

# Function defined outside the class

def f1(self, x, y):

return min(x, x+y)

class C:

f = f1

def g(self):

return 'hello world'

h = g

Now f, g and h are all attributes of class C that refer to function objects, and consequently they are all methods of instances of C -- h being exactly equivalent to g . Note that this practice usually only serves to confuse the reader of a program.

Methods may call other methods by using method attributes of the self argument:

 

class Bag:

def _ _init_ _(self):

self.data = []

def add(self, x):

self.data.append(x)

def addtwice(self, x):

self.add(x)

self.add(x)

Methods may reference global names in the same way as ordinary functions. The global scope associated with a method is the module containing the class definition. The class itself is never used as a global scope. While one rarely encounters a good reason for using global data in a method, there are many legitimate uses of the global scope: for one thing, functions and modules imported into the global scope can be used by methods, as well as functions and classes defined in it. Usually, the class containing the method is itself defined in this global scope.
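
Using the Bag class above (a quick sketch):

b = Bag( )
b.add('spam')
b.addtwice('eggs')                 # calls self.add(x) twice
print b.data                       # prints ['spam', 'eggs', 'eggs']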

Inheritance

Of course, a language feature would not be worthy of the name “class” without supporting inheritance. The syntax for a derived class definition looks as follows:

 

class DerivedClassName(BaseClassName):

<statement-1>

.

.

.

<statement-N>

The name BaseClassName must be defined in a scope containing the derived class definition. Instead of a base class name, an expression is also allowed. This is useful when the base class is defined in another module,

 

class DerivedClassName(modname.BaseClassName):

Execution of a derived class definition proceeds the same as for a base class. When the class object is constructed, the base class is remembered. This is used for resolving attribute references: if a requested attribute is not found in the class, it is searched in the base class. This rule is applied recursively if the base class itself is derived from some other class.

There's nothing special about instantiation of derived classes: DerivedClassName( ) creates a new instance of the class. Method references are resolved as follows: the corresponding class attribute is searched, descending down the chain of base classes if necessary and the method reference is valid if this yields a function object.

Derived classes may override methods of their base classes. Because methods have no special privileges when calling other methods of the same object, a method of a base class that calls another method defined in the same base class, may in fact end up calling a method of a derived class that overrides it (for C++ programmers: all methods in Python are effectively virtual ).

An overriding method in a derived class may in fact want to extend rather than simply replace the base class method of the same name. There is a simple way to call the base class method directly: just call BaseClassName.methodname(self, arguments) . This is occasionally useful to clients as well, if the base class is defined or imported directly in the global scope.
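
For example, a derived class (the name here is made up) might extend, rather than replace, the add( ) method of the Bag class shown earlier:

class LoggingBag(Bag):
    def add(self, x):
        print 'adding', x
        Bag.add(self, x)           # call the base class method directly

b = LoggingBag( )
b.add('spam')                      # prints 'adding spam' and then stores the item
print b.data                       # prints ['spam']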

Multiple Inheritance

Python supports a limited form of multiple inheritance as well. A class definition with multiple base classes looks as follows:

 

class DerivedClassName(Base1, Base2, Base3):

<statement-1>

.

.

.

<statement-N>

The only rule necessary to explain the semantics is the resolution rule used for class attribute references. This is depth-first, left-to-right. Thus, if an attribute is not found in DerivedClassName , it is searched in Base1 , then (recursively) in the base classes of Base1 , and only if it is not found there, it is searched in Base2 and so on.

To some, breadth-first -- searching Base2 and Base3 before the base classes of Base1 -- looks more natural. However, this would require knowing whether a particular attribute of Base1 is actually defined in Base1 or in one of its base classes before you can figure out the consequences of a name conflict with an attribute of Base2 . The depth-first rule makes no difference between direct and inherited attributes of Base1 .

It is clear that indiscriminate use of multiple inheritance is a maintenance nightmare, given the reliance in Python on conventions to avoid accidental name conflicts. A well-known problem with multiple inheritance is a class derived from two classes that happen to have a common base class. While it is easy enough to figure out what happens in this case (the instance will have a single copy of “instance variables” or data attributes used by the common base class), it is not clear that these semantics are in any way useful.
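
A small sketch of the depth-first, left-to-right rule (the class names are illustrative):

class A:
    def who(self):
        return 'A'

class Base1(A):
    pass

class Base2:
    def who(self):
        return 'Base2'

class Derived(Base1, Base2):
    pass

print Derived( ).who( )            # prints 'A': Base1 and its bases are searched before Base2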

Private Variables

There is limited support for class-private identifiers. Any identifier of the form _ _spam (at least two leading underscores, at most one trailing underscore) is now textually replaced with _classname_ _spam , where classname is the current class name with leading underscore(s) stripped. This mangling is done without regard of the syntactic position of the identifier, so it can be used to define class-private instance and class variables, methods, as well as globals and even to store instance variables private to this class on instances of other classes. Truncation may occur when the mangled name would be longer than 255 characters. Outside classes, or when the class name consists of only underscores, no mangling occurs.

Name mangling is intended to give classes an easy way to define “private” instance variables and methods, without having to worry about instance variables defined by derived classes, or mucking with instance variables by code outside the class.

  • The mangling rules are designed mostly to avoid accidents; it still is possible for a determined soul to access or modify a variable that is considered private. This can even be useful in special circumstances, such as in the debugger and that's one reason why this loophole is not closed (buglet: derivation of a class with the same name as the base class makes use of private variables of the base class possible).
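
A brief sketch of the mangling (the class and attribute names are illustrative):

class Agent:
    __secret = 42                  # stored under the mangled name _Agent__secret

print Agent._Agent__secret         # prints 42: the mangled name is still reachable from outside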

Notice that code passed to exec, eval( ) or execfile( ) does not consider the classname of the invoking class to be the current class; this is similar to the effect of the global statement, the effect of which is likewise restricted to code that is byte-compiled together. The same restriction applies to getattr( ), setattr( ) and delattr( ) , as well as when referencing _ _dict_ _ directly.

Here's an example of a class that implements its own __getattr__( ) and __setattr__( ) methods and stores all attributes in a private variable, in a way that works in all versions of Python, including those available before this feature was added:

 

class VirtualAttributes:

    __vdict = None

    __vdict_name = locals( ).keys( )[0]

    def __init__(self):

        self.__dict__[self.__vdict_name] = {}

    def __getattr__(self, name):

        return self.__vdict[name]

    def __setattr__(self, name, value):

        self.__vdict[name] = value

 

Odds and Ends

Sometimes it is useful to have a data type similar to the Pascal “record” or C “struct”, bundling together a couple of named data items. An empty class definition will do nicely:

 

class Employee:

    pass

john = Employee( ) # Create an empty employee record

# Fill the fields of the record

john.name = 'John Doe'

john.dept = 'computer lab'

john.salary = 1000

A piece of Python code that expects a particular abstract data type can often be passed a class that emulates the methods of that data type instead. For instance, if you have a function that formats some data from a file object, you can define a class with methods read( ) and readline( ) that gets the data from a string buffer instead and pass it as an argument.
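As a sketch of that idea, the following class (the name is arbitrary) wraps a string and supplies the read( ) and readline( ) methods that a file-expecting function might call:

class StringFile:
    ''' A minimal file-like object backed by a string (illustrative only). '''
    def __init__(self, text):
        self.text = text
        self.pos = 0

    def read(self, size=-1):
        if size < 0:
            size = len(self.text) - self.pos
        chunk = self.text[self.pos:self.pos + size]
        self.pos = self.pos + len(chunk)
        return chunk

    def readline(self):
        end = self.text.find('\n', self.pos)
        if end < 0:
            return self.read()
        line = self.text[self.pos:end + 1]
        self.pos = end + 1
        return line

f = StringFile("first line\nsecond line\n")
print f.readline( ),               # prints: first line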

Instance method objects have attributes, too: m.im_self is the object of which the method is an instance, and m.im_func is the function object corresponding to the method.
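A short illustration (the class name is arbitrary):

class Greeter:
    def hello(self):
        return "hi"

g = Greeter( )
m = g.hello                  # a bound instance method object
print m.im_self is g         # prints a true value: the instance the method is bound to
print m.im_func              # the underlying function object shared by all instances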

Exceptions Can Be Classes

User-defined exceptions are no longer limited to being string objects -- they can be identified by classes as well. Using this mechanism it is possible to create extensible hierarchies of exceptions.

There are two new valid (semantic) forms for the raise statement:

 

raise Class, instance

raise instance

In the first form, instance must be an instance of Class or of a class derived from it. The second form is a shorthand for:

 

raise instance.__class__, instance

An except clause may list classes as well as string objects. A class in an except clause is compatible with an exception if it is the same class or a base class thereof (but not the other way around -- an except clause listing a derived class is not compatible with a base class). For example, the following code will print B, C, D in that order:

 

class B:

    pass

class C(B):

    pass

class D(C):

    pass

for c in [B, C, D]:

    try:

        raise c( )

    except D:

        print "D"

    except C:

        print "C"

    except B:

        print "B"

  • If the except clauses were reversed (with except B first), it would have printed B, B, B -- the first matching except clause is triggered.

When an error message is printed for an unhandled exception which is a class, the class name is printed, then a colon and a space and finally the instance converted to a string using the built-in function str( ) .

Floating Point Numbers

Floating-point numbers are represented in computer hardware as base 2 (binary) fractions. For example, the decimal fraction

 

0.125

has value 1 / 10 + 2 / 100 + 5 / 1000, and in the same way the binary fraction

 

0.001

has value 0 / 2 + 0 / 4 + 1 / 8. These two fractions have identical values, the only real difference being that the first is written in base 10 fractional notation and the second in base 2.

Unfortunately, most decimal fractions cannot be represented exactly as binary fractions. A consequence is that, in general, the decimal floating-point numbers you enter are only approximated by the binary floating-point numbers actually stored in the machine.

The problem is easier to understand at first in base 10. Consider the fraction 1 / 3. You can approximate that as a base 10 fraction:

 

0.3

or, better,

 

0.33

or, better,

 

0.333

and so on. No matter how many digits you're willing to write down, the result will never be exactly 1 / 3, but will be an increasingly better approximation to 1 / 3.

In the same way, no matter how many base 2 digits you're willing to use, the decimal value 0.1 cannot be represented exactly as a base 2 fraction. In base 2, 1 / 10 is the infinitely repeating fraction

 

0.0001100110011001100110011001100110011001100110011...

Stop at any finite number of bits, and you get an approximation. This is why you see things like:

 

>>> 0.1

0.10000000000000001

On most machines today, that is what you'll see if you enter 0.1 at a Python prompt. You may not, though, because the number of bits used by the hardware to store floating-point values can vary across machines, and Python only prints a decimal approximation to the true decimal value of the binary approximation stored by the machine. On most machines, if Python were to print the true decimal value of the binary approximation stored for 0.1, it would instead have to display:

 

>>> 0.1

0.1000000000000000055511151231257827021181583404541015625

The Python prompt (implicitly) uses the builtin repr( ) function to obtain a string version of everything it displays. For floats, repr(float) rounds the true decimal value to 17 significant digits, giving

 

0.10000000000000001

repr(float) produces 17 significant digits because it turns out that's enough (on most machines) so that eval(repr(x)) == x exactly for all finite floats x , but rounding to 16 digits is not enough to make that true.

  • This is in the very nature of binary floating-point: this is not a bug in Python, it is not a bug in your code either and you'll see the same kind of thing in all languages that support your hardware's floating-point arithmetic (although some languages may not display the difference by default, or in all output modes).

Python's builtin str( ) function produces only 12 significant digits, and you may wish to use that instead. It's unusual for eval(str(x)) to reproduce x , but the output may be more pleasant to look at:

 

>>> print str(0.1)

0.1

It's important to realize that this is, in a real sense, an illusion: the value in the machine is not exactly 1 / 10, you're simply rounding the display of the true machine value.

Other surprises follow from this one. For example, after seeing

 

>>> 0.1

0.10000000000000001

you may be tempted to use the round( ) function to chop it back to the single digit you expect. But that makes no difference:

 

>>> round(0.1, 1)

0.10000000000000001

The problem is that the binary floating-point value stored for “0.1” was already the best possible binary approximation to 1 / 10, so trying to round it again can't make it better: it was already as good as it gets.

Another consequence is that since 0.1 is not exactly 1 / 10, adding 0.1 to itself 10 times may not yield exactly 1.0, either:

 

>>> sum = 0.0

>>> for i in range(10):

... sum += 0.1

...

>>> sum

0.99999999999999989

Binary floating-point arithmetic holds many surprises like this. The problem with "0.1" is explained in precise detail below, in the "Representation Error" section.

As that says near the end, “there are no easy answers”. Still, don't be unduly wary of floating-point. The errors in Python float operations are inherited from the floating-point hardware and on most machines are on the order of no more than 1 part in 2**53 per operation. That's more than adequate for most tasks, but you do need to keep in mind that it's not decimal arithmetic and that every float operation can suffer a new rounding error.

While pathological cases do exist, for most casual uses of floating-point arithmetic you'll see the result you expect in the end if you simply round the display of your final results to the number of decimal digits you expect. str( ) usually suffices, and for finer control see the discussion of Python's % format operator: the %g, %f and %e format codes supply flexible and easy ways to round float results for display.
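For example, each of these format codes rounds the stored value for display without changing the value itself:

x = 0.1
print "%.17g" % x      # 0.10000000000000001 -- the full repr( )-style view
print "%g" % x         # 0.1
print "%.3f" % x       # 0.100
print "%e" % x         # 1.000000e-01 (exact exponent formatting may vary by platform)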

Representation Error

This section explains the “0.1” example in detail and shows how you can perform an exact analysis of cases like this yourself; a basic familiarity with binary floating-point representation is assumed.

Representation error refers to the fact that some (most, actually) decimal fractions cannot be represented exactly as binary (base 2) fractions. This is the chief reason why Python (or Perl, C, C++, JAVA, Fortran and many others) often won't display the exact decimal number you expect:

 

>>> 0.1

0.10000000000000001

1 / 10 is not exactly representable as a binary fraction. Almost all machines today use IEEE-754 floating point arithmetic and almost all platforms map Python floats to IEEE-754 “double precision”. 754 doubles contain 53 bits of precision, so on input the computer strives to convert 0.1 to the closest fraction it can of the form J/2**N where J is an integer containing exactly 53 bits. Rewriting:

 

1 / 10 ~= J / (2**N)

as

 

J ~= 2**N / 10

and recalling that J has exactly 53 bits (is >= 2**52 but < 2**53 ), the best value for N is 56 :

 

>>> 2L**52

4503599627370496L

>>> 2L**53

9007199254740992L

>>> 2L**56/10

7205759403792793L

That is, 56 is the only value for N that leaves J with exactly 53 bits. The best possible value for J is then that quotient rounded:

 

>>> q, r = divmod(2L**56, 10)

>>> r

6L

Since the remainder is more than half of 10, the best approximation is obtained by rounding up:

 

>>> q+1

7205759403792794L

Therefore the best possible approximation to 1 / 10 in 754 double precision is that over 2**56 , or

 

7205759403792794 / 72057594037927936

  • Since we rounded up, this is actually a little bit larger than 1 / 10; if we had not rounded up, the quotient would have been a little bit smaller than 1 / 10. But in no case can it be exactly 1 / 10.

So the computer never “sees” 1 / 10: what it sees is the exact fraction given above, the best 754 double approximation it can get:

 

>>> .1 * 2L**56

7205759403792794.0

If we multiply that fraction by 10**30 , we can see the (truncated) value of its 30 most significant decimal digits:

 

>>> 7205759403792794L * 10L**30 / 2L**56

100000000000000005551115123125L

meaning that the exact number stored in the computer is approximately equal to the decimal value 0.100000000000000005551115123125 . Rounding that to 17 significant digits gives the 0.10000000000000001 that Python displays (well, will display on any 754-conforming platform that does best-possible input and output conversions in its C library).
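You can confirm this directly; the comparison below evaluates to a true value because both sides name exactly the same stored double:

print 0.1 == 7205759403792794.0 / 2L**56    # prints a true value (1 or True, depending on the version)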

Working With Scripting Languages

TestMaker includes Wizards and Recorders to facilitate creating tests. While these are powerful tools in their own right there is only so much one can do with them. For instance, suppose you need to write a test that analyzes the results of a server response according to some complex business logic and takes subsequent action based on the results. At moments like these we recommend you use TestMaker's dynamic scripting capabilities.

Writing A Test Script

Test scripts running in the TestMaker environment have several advantages:

  • Test scripts are programs. They have input parameters, output values, access to objects and frameworks and may be version controlled like any other software program.
  • TestMaker supports a variety of languages, including JAVA, Jython, Groovy, PHP, Ruby and many others. TestMaker does so by using the ScriptEngine (JSR 223) that appeared in JAVA 6.
  • TestMaker provides the Test Object Oriented Library (TOOL) to give the test scripts you write the ability to speak native SOA, soap, Web, telephony and email protocols.
  • TestMaker provides a base library of functions (called Agentbase) to make it easier to write a Web-oriented test using HTTP GET and POST commands and to emulate typical browser functions such as image caching.
  • TestMaker runs test scripts locally for functional and unit testing and turns test scripts into load tests, regression tests, service monitors and more using the TestScenario system and TestNodes.

This tutorial shows a series of examples that demonstrate these TestMaker advantages. We will show how to write a test script in the Jython language. Jython is the Python language implemented entirely in JAVA. Our intent is to make the concepts presented in these tutorials applicable to all of the supported scripting languages, including Groovy, Ruby, PHP, JAVAScript and many others.

Running A Test Script

Please follow these steps to create and run your first script on the local TestMaker console.

  • Open TestMaker and click the Create A New Test button in the QuickStart Helper or click the New Test icon in the icon bar.
  • Click the Generic Jython TestCase Script button.
  • Use the Save file selector to choose a name and path for the new script. Since this is a Jython script please choose a file name that ends in .py .
  • A skeleton of a test script opens in the editor panel. The skeleton implements a JUnit TestCase class, the most basic of unit tests of a service or application. TestCase classes have a setUp , runTest and tearDown method.
  • Using the editor change the skeleton to be as follows.

 

'''

Agent name: Examples1.py

Created on: July 18, 2007

Created by: Frank Cohen, PushToTest

 

For details on TestMaker see http://www.pushtotest.com

'''

from junit.framework import TestCase

import sys, re, time

from java.lang import Exception

class example1:

 

    def __init__( self ):

        ''' Initialize the test case '''

        print "Example1 here: Initializing"

 

    def setUp( self ):

        ''' Add any needed set-up code here. '''

        pass

 

    def runTest( self ):

        ''' Run the test '''

        pass

 

    def tearDown( self ):

        ''' Add any needed code to end the test here. '''

        pass

 

'''

Convenience main method for running this test by itself

otherwise, plug this into XSTest to turn it into a scalability

and load test, and the Service Monitor System (SMS) for

a Quality of Service (QOS) monitor.

 

'''

if __name__ == 'main' or __name__ == '__main__':

    print '======================================================='

    print 'example 1: Functional test'

    print '======================================================='

    print 'Test created by TestMaker from http://www.pushtotest.com'

    print

    print

 

    test = example1( )

    test.setUp( )

    test.runTest( )

    test.tearDown( )

    print "done"

  • Please make the change to the skeleton highlighted above. It prints an "Example1 here: Initializing" message when the script runs and the example1 class is instantiated. The changed skeleton comes with TestMaker in TestMaker_home/example_agents/scriptTutorial/example1.py .
  • Run this script by choosing the Run Agent command in the Agents drop-down menu.
  • Results of the script appear in the Output Panel.

 

=======================================================

example 1: Functional test

=======================================================

Test created by TestMaker from http://www.pushtotest.com

 

Example1 here: Initializing

 

done

In this tutorial we created a script that implements a JUnit TestCase class. We modified the skeleton script slightly and then learned how to run the script locally on your TestMaker computer. In the next tutorial we will learn how to use the functions of the Agentbase library to build a Web test script.

Using Agentbase In A Web Test Script

Agentbase provides a base set of functions to interact with a Web site over HTTP protocols. Agentbase is found in TestMaker_home/lib/agentbase.py .

 

This tutorial will show how to operate the HTTP_Example.py script found in TestMaker_home/example_agents/HTTP_Example.py .

  • From TestMaker open the HTTP_Example.py script in the TestMaker/example_agents directory. Use the navigation panel (to the left of the editor panel) to view the example_agents directory. Double click HTTP_Example.py . The script appears in the Editor Panel.
  • There are some facets of the HTTP_Example script that we should point out first.
  • Notice that the definition for the HTTP_Example class identifies the JUnit TestCase and AgentBase libraries. This is the way Jython uses inheritance from abstract classes and frameworks. Without having to write any additional code the HTTP_Example class inherits the methods defined in the Agentbase module class.
  • Read the __init__ method. Notice that this method takes several input values. Look at the bottom of this script to see details on the input values. All of these input values are optional in that the script will use default values if none are sent when the class instantiates.
  • Read the setUp method. Notice that this method makes a call to the self.config( ) method. config( ) is provided in the Agentbase library to establish internal values and parameters to work with a Web site in a test defined in HTTP_Example .
  • Read the runTest method. Notice the use of the get and post methods to interact with Web applications. Also notice the self.params value being set before several get and post method calls. self.params passes parameters to the target host.
  • Run this script by choosing the Run Agent command in the Agents drop-down menu.
  • Observe the results in the Output Panel.

Agentbase takes much of the coding out of a test script by providing several immediately useful methods. It also provides emulation of a Web browser cache and checks Web references to images and other Web resources. For instance, while processing a get command Agentbase also checks that all the images referenced in the received Web page actually make it to the browser. Here is a summary of the user selectable parameters for a script that uses Agentbase.

  •     log level = 0 - no logging, 1 - informational messages, 2 - detailed messages, 3 - everything
  •     log destination: console - to the screen, file - to a file, response - to the log file in XML form with the host response, database - to a JDBC datasource
  •     follow_redirects = 0 - do not follow HTTP 302 response codes, 1 - follow them automatically
  •     successcodes = Regular Expression (regex) defining HTTP success response codes
  •     logpath = path and file name to store the log file
  •     sleeptime_min = minimum amount of time in seconds to sleep between requests to the host
  •     sleeptime_max = maximum amount of time to sleep between requests to the host
  •     imagesleeptime = time in seconds to sleep between requests for <img> tag references
  •     loadimgtags = 1 - load <img> tag references, 0 - skip them
  •     imagecache = 1 - emulate a browser cache of <img> tag references, 0 - load all <img> tags
  •     logfirst = 1 - add a <testmaker> element to head new log files, 0 - ignore

Agentbase also provides cookie handling for state management. In the HTTP_Example class all of the get and post requests share a single HTTP protocol handler and store and send cookies to maintain that state.

Agentbase provides a setProxy( ) method to operate the test in environments using a proxy server.
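Putting these pieces together, the following is a minimal sketch of an Agentbase-style test in the spirit of HTTP_Example.py. The class name, log message and parameter values are placeholders, and the exact arguments your copy of agentbase.py expects for config( ) may differ, so treat this as an outline rather than a drop-in script:

import agentbase
from junit.framework import TestCase

class MySiteTest( agentbase.agentbase, TestCase ):

    def setUp( self ):
        ''' Establish Agentbase defaults before the test runs. '''
        self.config( )

    def runTest( self ):
        ''' Fetch a page, passing a parameter to the target host. '''
        self.log( 1, "fetching the responder page" )
        self.params = [ [ '''testinput''', '''hello''' ] ]
        self.get( '''http://examples.pushtotest.com/responder/htmlresponder''', self.params )

    def tearDown( self ):
        pass

def getMySiteTest( ):
    ''' Entry point for the JSR 223 ScriptEngine (see "JSR 223 and Class Instances" below). '''
    return MySiteTest( )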

In the next tutorial we will show how to use the TestGen4Web add-on to Firefox to jump start a Web-oriented test script.

Jump Start A Web Test Script

There are two additional ways to jump start a Web test script: Use the TestGen4Web add-on to the FireFox Web browser and use the Recorder.

Transform A TestGen4Web Test Into A Jython Test Script

TestMaker comes with the TestGen4Web add-on to the Firefox Web browser, which records your use of a Web application to create a unit test of that application. TestGen4Web creates a test in an XML file format that TestMaker operates using a langtype of testgen4web. For instance:

 

<run name="MyTest" testclass="PTT_Examples_UnitTest.testgen4web" method="testGen4Web" langtype="testgen4web"/>

The test runner in TestMaker that operates a TestGen4Web test plays-back the recorded test. To add more advanced functions to the test we recommend you create a Jython script from the recorded TestGen4Web test file.

  1. Select the Tools->Transform TestGen4Web Test command.
  2. A file dialog appears. Choose the TestGen4Web file .
  3. A save dialog appears. Choose the name and path of the new Jython script.
  4. The new Jython script appears in the script editor panel.

The new Jython script implements a class that operates the test just as the TestGen4Web XML document would, but now you have all the richness of the Jython scripting language to customize the test and make it more advanced.

Use The Recorder To Write a Jython Test Script

The Recorder is an alternative technique to using TestGen4Web. The Recorder watches you drive a Web-enabled application using your browser and writes a test agent script for you. You may play-back the test agent script in TestMaker or from the command-line to perform a functional (unit) test. Recorded tests may be used in a TestScenario to conduct scalability, performance and service monitoring tests.

The Recorder is built around a smart proxy server that watches for HTTP traffic between your browser and the service. The proxy decodes HTTP GET and POST commands from the browser and the responses from the server. The proxy then writes the Jython and TOOL script commands necessary to replay your use of the Web-enabled application. This design supports HTTP 1.0 and 1.1 compliant browsers, including the use of JAVAScript, plug-ins, ActiveX objects and JAVA applets.

The Agent Recorder does not support HTTPS connections. By the time the TestMaker proxy receives the request from the browser, the request is encrypted and TestMaker is not able to decode the body of the HTTP protocol. The only workaround at this time is to temporarily host your application with SSL encryption turned-off while you record the test and then turn on SSL encryption when you play-back the test.

Before using the Recorder you will need to configure your browser to communicate through the Recorder proxy. By default, TestMaker sets the proxy port to 8090. The proxy port is usually set in your browser preferences, and each browser handles proxy settings differently. For example, in Microsoft Internet Explorer 6.0 for Windows 2000, the proxy server settings are found by choosing the Tools -> Internet Options -> Connections Tab -> LAN Settings button. The lower portion of the dialog that appears controls the proxy server settings.

If port 8090 is already in use on your system then change the TestMaker proxy server to use a different port number. In TestMaker choose the Help -> Preferences -> Recorder Tab .

With the proxy settings configured, every request from the browser will go through TestMaker's proxy server. This has a side effect in that TestMaker needs to be running for you to use your browser.

Next we will record a test of the Web-enabled application hosted at examples.pushtotest.com. The application is a simple servlet that responds with Web pages. Depending on the parameters sent to the servlet, the response can be a Web page containing random content, a response to an HTTP POST command with HTML forms, or a Web site with HTML links that our recorded test will follow.

Follow these steps:

  1. Configure your browser to use Proxy port 8090 .
  2. Start TestMaker.
  3. Choose File -> New -> HTML Agent Recorder or click the Recorder button.
  4. TestMaker displays a dialog asking for the name of the new test agent. Type MyFirstTest .
  5. Click the Start Recording button.
  6. Point your browser to this URL: http://examples.pushtotest.com/responder/htmlresponder
  7. Click the link for file2.html . You should find the link about halfway down the Web page.
  8. The browser will display a new Web page that contains two forms. In the top-most form, enter your first and last name , and your phone number . Then enter 1081 as the account number and 75.36 in the amount field. Click the Transfer Funds button.
  9. Your browser will show a new Web page that echoes the HTML form information you submitted on the previous page.
  10. Return to TestMaker and click the Stop Recording button.
  11. TestMaker will display a standard file selector dialog. Navigate to the directory you wish to store the new agent, enter a file name for the agent , and click the Save button.
  12. The new test agent script will appear in the Editor Panel in TestMaker.

To play-back the new agent script, click the Run icon in the Execution panel. Alternatively you may choose the Run command in the Agent drop-down menu.

The agent will replay the steps you took while using the browser to drive the Web-enabled application on the examples.pushtotest.com domain. You will see the results of the play-back in the Output Panel.

Advanced Options

The Recorder includes several advanced options to make the recorded scripts act in a real-world manner. The advanced options include the ability to load image references embedded in a retrieved Web page, follow 302-redirect responses from the Web host and much more. To view them, click the Advanced Options button in the Recorder window; the window expands to show the additional settings.

 

The Recorder offers the following advanced options:

Log To

As TestMaker runs a test agent script it logs informational messages and results. This option controls the destination of the logged data.

  • Output Panel - Displays logged data into the output panel of the TestMaker graphic interface (the lower right corner).
  • File - Writes logged data to the file defined in Log path / name .
  • File with Response - Writes the logged data in XML format to the file defined in Log path / name . The XML file format is defined in testmaker_home/docs/results.xsd . The file format saves the request header and body and the response header and body.
  • Database - Writes the logged data to a relational database using your provided JDBC driver. This requires you to edit testmaker_home/lib/agentbase.py to configure the JDBC driver.
Log Path / Name

Destination file name and path to write the logged data TestMaker generates while running a test agent. The Browse button displays a standard file selector to graphically navigate the file system and directories.

Log Level

Controls the amount of information logged by the test agent, including:

  • 0 - No logging .
  • 1 - Major operations. This logs informational messages only. For example, the test agent logs a notice when it has finished setting up for a test.
  • 2 - Details . This instructs the test agent to display the contents of an HTTP request and response.
  • 3 - Debug. The test agent displays all information and detailed information, plus additional details that may be useful while debugging test agent scripts.
Sleep Time

This setting controls the minimum and maximum amount of time, in seconds, to pause between Web page requests. You set the minimum and maximum values and the test agent chooses an amount of time between the two values to pause.

Success Responses

This is a Regular Expression (regex) defining the HTTP response codes that indicate the request was accepted by the Web host. By default this regex considers HTTP response codes of 200-299, 300-304, and a few others to be successful responses. HTTP response codes of 500, for example, are not part of the regex and so the test agent throws an exception.
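For illustration only -- the actual default ships inside Agentbase -- a regex in this spirit would accept the 2xx and 300-304 codes while rejecting 500:

import re

success_codes = re.compile( r"^(2\d\d|30[0-4])$" )    # illustrative pattern, not the shipped default

print success_codes.match( "200" ) is not None        # accepted
print success_codes.match( "302" ) is not None        # accepted
print success_codes.match( "500" ) is not None        # rejected -- the test agent raises an exception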

Follow HTTP 302 Redirects

Web hosts may return a 302 response code to direct a Web browser to load another page. Checking this field tells the test agent at runtime to follow 302 response codes and load the redirected pages automatically.

Load <IMG> Tag References

When checked the recorded test agent will automatically parse a retrieved Web page for <img> image tag references and make HTTP requests to load the images. If File with Response is checked then the test agent will store the URL, the time it takes to load the image and the image size in bytes for each image in the Web page.

TestMaker uses the TagSoup library to parse HTML pages for <img> tag references. Instead of parsing well-formed or valid XML, TagSoup parses HTML as it is found in the wild: the nasty and brutish world of the Web. It makes a best effort to parse a Web page. The test agent will show errors in the logged data when they occur.

<IMG> Tag Reference Sleep Time

This determines the number of seconds the test agent will pause before loading the next <img> tag reference.

Emulate Browser Image Caching

Web browsers usually store downloaded images in a local cache. When checked this instructs the test agent to avoid loading the same <img> tag references more than once.

When Is It Appropriate To Use TestGen4Web and the Recorder?

You may be wondering when it is appropriate to use the TestGen4Web add-on to Firefox and when to use the Recorder. Here is a summary of the advantages of each approach.

  • Record secure Web pages using HTTPS protocols: TestGen4Web - Yes; Recorder - No
  • Transforms the recorded test into a Jython script: TestGen4Web - Yes; Recorder - Yes
  • Underlying framework of the transformed script: TestGen4Web - HTML Unit; Recorder - TOOL
  • Edit and reorganize recorded steps in a graphical interface: TestGen4Web - Yes; Recorder - No
  • Emulate browser functions (browser image caching, testing of image tag validity, log levels): TestGen4Web - No; Recorder - Yes

Using TOOL Protocol Handlers In A Script

The nature of XML, Service Oriented Architecture (SOA), Web Services, databases and applications means we need to be prepared for new and inventive protocols to interoperate with services and applications.

TestMaker comes with several example test agent scripts that show immediately useful techniques and many of the other facets of writing test scripts using the TOOL and Agentbase libraries. For instance, TestMaker_home/example_agents/soap_Message_Example.py is a Jython script that uses the soap protocol handler in TOOL to interoperate with a Web Service. The following are excerpts from this script.

 

from com.pushtotest.tool.protocolhandler import ProtocolHandler, soapProtocol, soapBody, soapHeader

from com.pushtotest.tool.response import Response

   . . .

 

self.protocol = ProtocolHandler.getProtocol("soap")

self.body = soapBody( )

self.protocol.setBody(self.body)

self.protocol.setUrl("http://examples.pushtotest.com/axis/services/MessageService")

   . . .

 

self.doc="<hello>Jack</hello>"

   . . .

 

self.body.setDocument( self.doc )

self.response = self.protocol.connect( )

   . . .

The above excerpts create a new soap protocol handler, create and attach a new soap body, set the URL for the target service, set the message body document ( self.doc ) and then connect to the service. The response variable holds the response object, which has its own set of APIs to analyze and understand the result.

TOOL is an extensible protocol handler library. For instance, one TestMaker user needed a SIP protocol handler to test an Asterisk telephony application. He spent less than a day adding a SIP protocol handler to TOOL. See the developer's guide for more information.

Using Scripts in TestScenarios

TestMaker's TestScenario facility turns any of these scripts into a functional test, load test or service monitor. TestScenario documents identify the test scripts to operate during a test. For instance, the following are portions of a TestScenario that operate a test written in JAVA:

 

. . .

<resources>

    <module name="myjar" path="myclasses.jar"/>

</resources>

. . .

<run name="test1" testclass="com.pushtotest.example.mytest" method="runTest" langtype="java">

. . .

The <resources> identifies the JAR file containing my testclass. The <run> instantiates a mytest object and calls the runtest method. Here is what the mytest class looks like.

 

package com.pushtotest.example;

 

import junit.framework.TestCase;

 

public class mytest extends TestCase {

 

public void setUp( )

{

   . . .

}

 

public void runTest( )

 

{

    System.out.println("Running the test.");

    . . .

}

public void tearDown( )

{

   . . .

}

 

}

The langtype attribute in <run> identifies the language type to be used. TestMaker supports these language types:

  • JAVA - langtype java
  • Jython (the JAVA implementation of the popular Python language) - langtype jython
  • JRuby (the JAVA implementation of the Ruby language) - langtype jruby
  • Groovy - langtype groovy
  • Rhino (the open-source implementation of the JAVAScript language) - langtype rhino

The <resources> section of a TestScenario identifies the test script. The following is an example using JRuby:

 

<resources>

  <module name="rubyModule" path="./example_agents/JRubyExample/example.rb"/>

</resources>

. . .

 

<run module="rubyModule" name="calc" testclass="Calculator" method="average_of_3" langtype="jruby"/>

...

 

and for Jython use the following:

 

<resources>

  <module name="myJythonScript" path="./example_agents/HTTP_Example.py"/>

</resources>

. . .

 

<run module="myJythonScript" name="test1" testclass="TestClass" method="runTest" langtype="jython"/>

...

 

See the Example Agents for examples of running JRuby, Jython and other script language test scripts.

JSR 223 and Class Instances

TestMaker uses the JSR 223 ScriptEngine, which appeared in JAVA 6, to operate tests written in the supported languages. The ScriptEngine is not able to instantiate a class on its own; it can only call a function or invoke a method on an object that already exists. To do that, it needs a function that returns the object. To accommodate the ScriptEngine, test scripts must include an “entry point” function that returns an instance of the test class referenced in the <run> , <setup> or <teardown> elements of a test. TestMaker follows a convention that the entry point function is of the form get<classname>( ) . The following shows an instance of this for a Jython script.

 

class DPLExample( agentbase.agentbase,  junit.framework.TestCase ):

    . . .

 

    def runTest( self, dpl_provided_argument_value ):

        ''' Run the test '''

        self.log( 1, "test: runTest" )

        self.params = [ [ '''testinput''', dpl_provided_argument_value ] ]

        self.get( '''http://examples.pushtotest.com/responder/htmlresponder''', self.params)

        . . .

 

def getDPLExample( ):

       '''

       Returns an instance of this object for the JSR 223 ScriptEngine in TestMaker

       '''

       return DPLExample( )

eMail Protocol Handler

The Mail protocol handler enables test agents to check SMTP, POP3 and IMAP services for scalability and functionality.

The new Mail protocol handler object provides a simple interface to send and receive email messages in test agent scripts. See the Mail_Agent.a for examples of the Mail protocol handler in action.

Send a Simple Email Message

Simple email messages contain address information and a text encoded message. The address information includes the host name and the to and from addresses. The body of the message is simply a text string. Here is an example TestMaker script to illustrate how to send a simple email message:

 

from com.pushtotest.tool.protocolhandler import ProtocolHandler, MailBody

 

protocol = ProtocolHandler.getProtocol("mail")

protocol.setHost("mail.pushtotest.com")

body = MailBody( protocol.getSession( ) )

protocol.setBody(body)

 

body.setFrom( " This e-mail address is being protected from spambots. You need JavaScript enabled to view it " )

body.addAddress( " This e-mail address is being protected from spambots. You need JavaScript enabled to view it " )

body.setSubject( "Hey Buddy!" )

body.setText( "Time to go skiing." )

 

protocol.send( )

Next we look at the functions used in the script.

 

from com.pushtotest.tool.protocolhandler import ProtocolHandler, MailBody

The import command tells the Jython scripting language where to find the ProtocolHandler and MailBody objects.

 

protocol = ProtocolHandler.getProtocol("mail")

This creates a new instance of the MailProtocol object that we will reference in the protocol variable.

 

protocol.setHost("mail.pushtotest.com")

This tells the MailProtocol object where to find the email host. PushToTest created an email account on mail.pushtotest.com to enable you to run the test agent for real. Please do not abuse this email account.

 

body = MailBody( protocol.getSession( ) )

protocol.setBody(body)

A new MailBody object will be used to define the parameters of the email message. In typical object-oriented programming fashion test agent scripts could maintain a number of MailBody objects depending on the message to be sent.

 

body.setFrom( " This e-mail address is being protected from spambots. You need JavaScript enabled to view it " )

body.addAddress( " This e-mail address is being protected from spambots. You need JavaScript enabled to view it " )

body.setSubject( "Hey Buddy!" )

body.setText( "Time to go skiing." )

These commands set the parameters of the email message body.

protocol.send( )

Lastly, the send method sends the email message to the host. The send method will throw an exception when the message is unsuccessful. The exception will include text describing the problem, including badly formed From and To addresses, bad host name, or bad host connectivity.  
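Because send( ) raises an exception on failure, a test agent can wrap the call and report the problem; a brief sketch, assuming the handler surfaces a JAVA exception (adjust the except clause if your version raises a different type):

from java.lang import Exception as JavaException

try:
    protocol.send( )
    print "message sent"
except JavaException, e:
    # Badly formed From or To addresses, a bad host name or connectivity
    # problems all end up here.
    print "send failed:", e.getMessage( )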

Receive and Delete Email Messages

The MailProtocol handler includes functions for reading an email host. Here is an example TestMaker script to illustrate how to receive simple email messages:

 

from com.pushtotest.tool.protocolhandler import ProtocolHandler, MailBody

from javax.mail import Folder, Flags

 

protocol = ProtocolHandler.getProtocol("mail")

protocol.setHost("fries.pushtotest.com")

protocol.setUsername("buddy")

protocol.setPassword( "email" )

 

response = protocol.connect( )

 

response.setPermission( Folder.READ_ONLY )

 

#response.setPermission( Folder.READ_WRITE )

 

response.setFolder( "INBOX" )

 

messages = response.getMessages( )

 

for i in messages:

    print i.getFrom( )[0], i.getSubject( )

    i.setFlag( Flags.Flag.DELETED, 1 )

 

response.close( 1 )

print "done"

Next we look at the functions used in the script.

 

from com.pushtotest.tool.protocolhandler import ProtocolHandler, MailBody

from javax.mail import Folder, Flags

The import command tells the Jython scripting language where to find the ProtocolHandler and MailBody objects. Additionally, the Folder and Flags objects are used in this agent script to read email messages and to mark them for deletion on the mail host.

 

protocol = ProtocolHandler.getProtocol("mail")

This creates a new instance of the MailProtocol object that we will reference in the protocol variable.

 

protocol.setHost("fries.pushtotest.com")

protocol.setUsername("buddy")

protocol.setPassword( "email" )

These commands tell the MailProtocol object where to find the email host and which account to use. PushToTest created the buddy email account on mail.pushtotest.com to enable you to run the test agent for real. Please do not abuse this email account.

 

response = protocol.connect( )

The connect method returns a response object that opens a session with the mail host. The response object provides the methods needed to read and delete email messages. By default the connection to the mail host uses the POP3 protocol. TestMaker includes IMAP support. To change protocols use the protocol.setProtocol(imap) method before using the connect( ) method.
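For example, a script that reads an IMAP mailbox might start like this; the string argument to setProtocol( ) is an assumption here, so check the TOOL documentation for the exact form your version expects:

protocol = ProtocolHandler.getProtocol("mail")
protocol.setHost("fries.pushtotest.com")
protocol.setUsername("buddy")
protocol.setPassword( "email" )
protocol.setProtocol( "imap" )          # switch from the POP3 default to IMAP (assumed string form)

response = protocol.connect( )
response.setFolder( "INBOX" )           # IMAP hosts may expose additional folders as well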

 

response.setPermission( Folder.READ_ONLY )

The setPermission method instructs the mail host on our test agent's intentions. Folder.READ_ONLY by default tells the mail host we will be reading email messages but not deleting them from the host. Folder.READ_WRITE tells the mail host to expect message delete commands.

 

response.setFolder( "INBOX" )

The POP3 protocol uses a single folder titled INBOX by default. The IMAP protocol supports multiple folders.

 

messages = response.getMessages( )

The getMessages( ) method receives a list of messages from the host.

 

for i in messages:

    print i.getFrom( )[0], i.getSubject( )

The for loop iterates through each message. The getFrom and getSubject methods return the from address and subject of each message; the writeTo method (not shown here) writes the entire message content to an output stream.

i.setFlag( Flags.Flag.DELETED, 1 )

The setFlag method enables the agent script to mark the email message for deletion from the mail host. The flag is set in this loop but does not actually happen until the close( ) method executes. The flag has no effect if the setPermission method is not set to Folder.READ_WRITE .

 

response.close( 1 )

Closes the connection to the host. The close method takes a single boolean parameter. When the parameter is set to True or 1 then the close command instructs the mail host to delete any flagged messages. A False or 0 parameter tells the mail host to ignore the deleted messages.  

Send an Email Message with File Attachments

Email messages use MIME encoding to attach files, icons and binary data to a message body. The MailProtocol handler supports MIME encoding too. The agent script below illustrates how to send an email message with a file attachment:

 

from com.pushtotest.tool.protocolhandler import ProtocolHandler, MailBody

from javax.activation import DataSource, FileDataSource, DataHandler

 

protocol = ProtocolHandler.getProtocol("mail")

protocol.setHost("fries.pushtotest.com")

body = MailBody( protocol.getSession( ) )

protocol.setBody(body)

 

body.setFrom( " This e-mail address is being protected from spambots. You need JavaScript enabled to view it " )

body.addAddress( " This e-mail address is being protected from spambots. You need JavaScript enabled to view it " )

body.setSubject( "Hey Buddy!" )

body.setText( "Time to go skiing." )

 

part = body.getNewBodyPart( )

part.setText("Big idea")

 

fn = "c:\\wiki_man.gif"

source = FileDataSource( fn )

part.setDataHandler( DataHandler( source ) )

part.setFileName ( source.getName( ) )

 

part = body.getNewBodyPart( )

part.setText("Big ski time")

 

fn = "C:\\FileExchange\\Photo Library\\2001\\2001-02-23 Sagogn\\DVC00008.JPG"

source = FileDataSource( fn )

part.setDataHandler( DataHandler( source ) )

part.setFileName ( source.getName( ) )

 

protocol.send( )

 

print "message sent."

 

Next we look at the functions used in the script.

 

from com.pushtotest.tool.protocolhandler import ProtocolHandler, MailBody

from javax.activation import DataSource, FileDataSource, DataHandler

The import command tells the Jython scripting language where to find the ProtocolHandler and MailBody objects. Additionally the DataSource , FileDataSource and DataHandler objects help to construct a multipart MIME encoded message body.

 

protocol = ProtocolHandler.getProtocol("mail")

As in the previous example we use the getProtocol( ) method to create a new MailProtocol handler that we will reference using the protocol variable.

 

protocol.setHost("fries.pushtotest.com")

The setHost( ) method defines the domain of the mail host.

 

body = MailBody( protocol.getSession( ) )

protocol.setBody(body)

Next we create a MailBody object and link it to the MailProtocol handler object. This enables us to have multiple MailBody objects to put to use.

 

body.setFrom( " This e-mail address is being protected from spambots. You need JavaScript enabled to view it " )

body.addAddress( " This e-mail address is being protected from spambots. You need JavaScript enabled to view it " )

The setFrom( ) and addAddress( ) methods enable us to address the email message. The addresses must be SMTP email addresses. addAddress( ) may be used to address a single message to as many recipients as we wish.
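For example, to send the same message to several recipients, call addAddress( ) once per recipient (the addresses below are placeholders):

body.setFrom( "sender@example.com" )                  # placeholder address
body.addAddress( "first.recipient@example.com" )      # placeholder address
body.addAddress( "second.recipient@example.com" )     # placeholder address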

 

body.setSubject( "Hey Buddy!" )

body.setText( "Time to go skiing." )

Next we define the subject line and body text of the email message. The setText method sets the simple body of the message. Most email clients will automatically show the simple body message text and the attached files in-line. Now we're ready to attach 2 files to this message body.

 

part = body.getNewBodyPart( )

part.setText("Big idea")

We create a new MIME body part to hold the first file. This file is a simple GIF image residing in the C:\ directory. Any file, or even an in-memory data stream, could be used instead of this particular file.

 

fn = "c:\\wiki_man.gif"

source = FileDataSource( fn )

part.setDataHandler( DataHandler( source ) )

part.setFileName ( source.getName( ) )

The setDataHandler( ) method identifies the input source of the file contents for when the actual email message is composed. If an email client cannot display the attached file then it displays the file name set by setFileName( ) . That's all there is to attaching a file. When the agent needs to attach another file to the same message the agent script gets another new body part.

 

part = body.getNewBodyPart( )

part.setText("Big ski time")

 

fn = "C:\\FileExchange\\Photo Library\\2001\\2001-02-23 Sagogn\\DVC00008.JPG"

source = FileDataSource( fn )

part.setDataHandler( DataHandler( source ) )

part.setFileName ( source.getName( ) )

The second attachment, captioned Big ski time, is sent to the mail host using the same attachment procedure.

protocol.send( )

The last step is to tell the MailProtocol handler to send the email message. The handler packages all necessary information and instantly sends the email message to the email server.  

Receive an Email with File Attachment

Previously we showed how to construct a test agent script to receive a simple text message. Email messages use MIME encoding to attach files, icons and binary data to a message body. The Mail protocol handler supports MIME decoding too. The agent script below illustrates how to receive an email message with file attachments. Getting attachments from an email message is more involved than sending them. MIME has no simple notion of attachments. That is mostly because MIME came after SMTP and POP3 and the IETF lost its way trying to further these email protocols while the Internet revolution was taking off. First we will show the agent script in its entirety, then we will describe how each function operates.

 

from com.pushtotest.tool.protocolhandler import ProtocolHandler, MailBody

from javax.mail import Folder, Multipart, Part

from java.io import File, FileInputStream, FileOutputStream

 

protocol = ProtocolHandler.getProtocol("mail")

protocol.setHost("fries.pushtotest.com")

protocol.setUsername("buddy")

protocol.setPassword( "email" )

 

response = protocol.connect( )

 

response.setPermission( Folder.READ_ONLY )

 

#response.setPermission( Folder.READ_WRITE )

response.setFolder( "INBOX" )

 

messages = response.getMessages( )

 

counter = 1

 

for msg in messages:

    print msg.getFrom( )[0], msg.getSubject( )

    #msg.setFlag( Flag.DELETED, 1 )

 

    if msg.isMimeType( "multipart/*" ):

        mp = msg.getContent( )

        for i in range( mp.getCount( ) ):

            part = mp.getBodyPart( i )

 

            disp = part.getDisposition( )

 

            if ( disp != None ) and ( ( disp == Part.ATTACHMENT ) or ( disp == Part.INLINE ) ):

                inp = part.getInputStream( )

                out = File( "c:\\myfile" + str( counter ) )

                fos = FileOutputStream( out )

                more = 1

                while more:

                    c = inp.read( )

                    if c == -1:

                        more = 0

                    else:

                        fos.write(c)

 

                inp.close( )

                fos.close( )

 

                print counter

                counter+=1

    else:

        mp = msg.getContent( )

        print "mp=",mp

 

response.close( 0 )

 

print "done"

This agent script opens a connection to the mail host and reads each of the waiting email messages from a special email account hosted on the PushToTest service. PushToTest created an email account on mail.pushtotest.com to enable you to run the test agent in real time. Please do not abuse this email account. Next we will look at the functions used in the script:

 

from com.pushtotest.tool.protocolhandler import ProtocolHandler, MailBody

from javax.mail import Folder, Multipart, Part

from java.io import File, FileInputStream, FileOutputStream

The import command tells the Jython scripting language where to find the ProtocolHandler and MailBody objects. Additionally the Folder, Multipart, Part, File, FileInputStream and FileOutputStream objects help to find the MIME encoded file attachments in email messages.

 

protocol = ProtocolHandler.getProtocol("mail")

As in the previous example we use the getProtocol( ) method to create a new MailProtocol handler that we will reference using the protocol variable. By default this handler will use the POP3 protocol to receive email messages from the email host.

 

protocol.setHost("fries.pushtotest.com")

protocol.setUsername("buddy")

protocol.setPassword( "email" )

The setHost( ) method defines the domain of the mail host. We also define the email account name and password to check for email.

 

response = protocol.connect( )

The connect( ) method initiates a connection and establishes a session with the email host. The connect method returns a response object that opens a session with the mail host. The response object provides the methods needed to read and delete email messages.

 

response.setPermission( Folder.READ_ONLY )

By default the connection to the email host is read-only. That is, email messages may be read from the host but not deleted from the host. To read and delete email messages use the Folder.READ_WRITE permission.

 

response.setFolder( "INBOX" )

The POP3 protocol uses a single folder titled INBOX by default. The IMAP protocol supports multiple folders.

 

messages = response.getMessages( )

The getMessages( ) method receives a list of waiting messages from the host.

 

counter = 1

 

for msg in messages:

We loop through all the waiting messages, handling them one at a time.

print msg.getFrom( )[0], msg.getSubject( )

The getFrom and getSubject methods return the from address and subject of the message.

#msg.setFlag( Flag.DELETED, 1 )

Once the message is read we may want to delete it from the email host. To do so we set a flag that this message is to be deleted. Only when we close the current connection and session do the messages become deleted.

if msg.isMimeType( "multipart/*" ):

First we check to make sure the message contains a MIME encoded message body. If it is not MIME encoded then we should expect that it is a simple text message and handle it accordingly.

mp = msg.getContent( )

for i in range( mp.getCount( ) ):

The MIME encoded message may contain one or more attachments. We loop through each attachment in the current message.

part = mp.getBodyPart( i )

 

disp = part.getDisposition( )

 

if ( disp != None ) and ( ( disp == Part.ATTACHMENT ) or ( disp == Part.INLINE ) ):

Each part of the MIME encoded message body is marked with a disposition. Parts marked with a disposition of Part.ATTACHMENT are clearly attachments. However, attachments can also come across with no disposition and a non-text MIME type or a disposition of Part.INLINE . When the disposition is either Part.ATTACHMENT or Part.INLINE you can save off the content for that message part by getting an input stream from the part's contents.

 

inp = part.getInputStream( )

out = File( "c:\\myfile" + str( counter ) )

fos = FileOutputStream( out )

more = 1

while more:

    c = inp.read( )

    if c == -1:

        more = 0

    else:

        fos.write(c)

inp.close( )

fos.close( )

These script commands move the attachment contents into a file on the local file system. It is up to you to change the File( ) declaration above to something more meaningful.

 

print counter

counter+=1

If the message fails the isMimeType( ) test above, we treat it like a simple text message with no attachments:

 

else:

    mp = msg.getContent( )

    print "mp=",mp

The getContent method gets the simple message body and displays it in the output window.

 

response.close( 0 )

Closes the connection to the host. The close method takes a single boolean parameter. When the parameter is set to True or 1 then the close command instructs the mail host to delete any flagged messages. A False or 0 parameter tells the mail host to ignore the deleted messages.

 

print "done"

 MailProtocol Handler Implementation

The MailProtocol handler object provides a simple interface to send and receive email messages in test agent scripts. The Mail object uses the underlying JAVA Mail API, the standard JAVA library for working with email protocols. The JAVA Mail API provides SMTP, POP3 and IMAP services. The JAVA Mail API implements a simple API for adding new email services over time. The docs.pushtotest.com document library hosts the JAVAdoc comments for the JAVA Mail API at http://docs.pushtotest.com/javamail/javadocs/

 

 

 

 

 

Additional documentation, product downloads and updates are at www.PushToTest.com. While the PushToTest TestMaker software is distributed under an open-source license, the documentation remains (c) 2008 PushToTest. All rights reserved. PushToTest is a trademark of the PushToTest company.