Tuesday, June 12, 2012

TCL - Child interpreters

For most applications, a single interpreter and subroutines are quite sufficient. However, if you are building a client-server system (for example) you may need to have several interpreters talking to different clients, and maintaining their state. You can do this with state variables, naming conventions, or swapping state to and from disk, but that gets messy.
The interp command creates new child interpreters within an existing interpreter. The child interpreters can have their own sets of variables, commands and open files, or they can be given access to items in the parent interpreter.
If the child is created with the -safe option, it will not be able to access the file system, or otherwise damage your system. This feature allows a script to evaluate code from an unknown (and untrusted) source.
The names of child interpreters are a hierarchical list. If interpreter foo is a child of interpreter bar, then it can be accessed from the toplevel interpreter as {bar foo}.
The primary interpreter (what you get when you type tclsh) is the empty list {}.
The interp command has several subcommands and options. A critical subset is:
interp create -safe name
Creates a new interpreter and returns the name. If the -safe option is used, the new interpreter will be unable to access certain dangerous system facilities.
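For instance, a safe child interpreter will still evaluate ordinary Tcl, but dangerous commands such as open are hidden from it. A minimal sketch (the interpreter name untrusted is arbitrary):

```tcl
# Create a safe child interpreter for untrusted code.
set safe [interp create -safe untrusted]

# Plain computation still works inside the safe interpreter.
puts [interp eval $safe {expr {6 * 7}}]    ;# prints 42

# File access has been hidden, so this raises an error we can catch.
if {[catch {interp eval $safe {open /etc/passwd r}} msg]} {
    puts "safe interp refused: $msg"
}

interp delete $safe
```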
interp delete name
Deletes the named child interpreter.
interp eval path arg ?arg ...?
This is similar to the regular eval command, except that it evaluates the script in the child interpreter instead of the primary interpreter. The interp eval command concatenates the args into a string, and ships that line to the child interpreter to evaluate.
interp alias srcPath srcCmd targetPath targetCmd ?arg arg ...?
The interp alias command allows a script to share procedures between child interpreters or between a child and the primary interpreter.
Note that child interpreters have separate state and namespaces, but do not have separate event loops. These are not threads, and they will not execute independently. If one child interpreter is stopped by a blocking I/O request, for instance, no other interpreter will be processed until it has unblocked.
The example below shows two child interpreters being created under the primary interpreter {}. Each of these interpreters is given a variable name which contains the name of the interpreter.
Note that an alias causes the procedure to be evaluated in the interpreter in which the procedure was defined, not the interpreter in which the alias was invoked. If you need a procedure to exist within an interpreter, you must interp eval a proc command within that interpreter. If you want an interpreter to be able to call back to the primary interpreter (or another interpreter), use the interp alias command.


set i1 [interp create firstChild]
set i2 [interp create secondChild]

puts "first child interp: $i1"
puts "second child interp: $i2\n"

# Set a variable "name" in each child interp, and
# create a procedure within each interp
# to return that value

foreach int [list $i1 $i2] {
    interp eval $int [list set name $int]
    interp eval $int {proc nameis {} {global name; return "nameis: $name";} }
}

foreach int [list $i1 $i2] {
    interp eval $int "puts \"EVAL IN $int: name is \$name\""
    puts "Return from 'nameis' is: [interp eval $int nameis]"
}

# A short program to return the value of "name"
proc rtnName {} {
    global name
    return "rtnName is: $name"
}

# Alias that procedure to a proc in $i1
interp alias $i1 rtnName {} rtnName

puts ""

# This is an error. The alias causes the evaluation
# to happen in the {} interpreter, where name is
# not defined.
puts "firstChild reports [interp eval $i1 rtnName]"

TCL - More channel I/O - fblocked & fconfigure

The previous lessons have shown how to use channels with files and blocking sockets. Tcl also supports non-blocking reads and writes, and allows you to configure the sizes of the I/O buffers, and how lines are terminated.
A non-blocking read or write means that instead of a gets call waiting until data is available, it will return immediately. If a complete line is available it will be read; if not, the gets call returns -1.
If you have several channels that must be checked for input, you can use the fileevent command to trigger reads on the channels, and then use the fblocked command to determine when all the data is read.
The fblocked and fconfigure commands provide more control over the behavior of a channel.
The fblocked command checks whether a channel has returned all available input. It is useful when you are working with a channel that has been set to non-blocking mode and you need to determine if there should be data available, or if the channel has been closed from the other end.
The fconfigure command has many options that allow you to query or fine tune the behavior of a channel including whether the channel is blocking or non-blocking, the buffer size, the end of line character, etc.
fconfigure channel ?param1? ?value1? ?param2? ?value2?
Configures the behavior of a channel. If no param values are provided, a list of the valid configuration parameters and their values is returned.

If a single parameter is given on the command line, the value of that parameter is returned.
If one or more pairs of param/value pairs are provided, those parameters are set to the requested value.
Parameters that can be set include:
  • -blocking . . . Determines whether or not the task will block when data cannot be moved on a channel (i.e., if no data is available on a read, or the buffer is full on a write).
  • -buffersize . . . The number of bytes that will be buffered before data is sent, or can be buffered before being read when data is received. The value must be an integer between 10 and 1000000.
  • -translation . . . Sets how Tcl will terminate a line when it is output. By default, the lines are terminated with the newline, carriage return, or newline/carriage return that is appropriate to the system on which the interpreter is running.

    This can be configured to be:
    • auto . . . Translates newline, carriage return, or newline/carriage return as an end of line marker. Outputs the correct line termination for the current platform.
    • binary . . Treats newlines as end of line markers. Does not add any line termination to lines being output.
    • cr . . . . Treats carriage returns as the end of line marker (and translates them to newline internally). Output lines are terminated with a carriage return. This is the Macintosh standard.
    • crlf . . . Treats cr/lf pairs as the end of line marker, and terminates output lines with a carriage return/linefeed combination. This is the Windows standard, and should also be used for all line-oriented network protocols.
    • lf . . . . Treats linefeeds as the end of line marker, and terminates output lines with a linefeed. This is the Unix standard.
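The three calling forms of fconfigure described above can be sketched against the standard stdout channel (the exact option list returned varies by platform and channel type):

```tcl
# No parameters: returns the full option/value list for the channel.
puts [fconfigure stdout]

# One parameter: returns just that option's current value.
puts "blocking: [fconfigure stdout -blocking]"

# Param/value pairs: set options.  Line buffering is a
# reasonable setting for an interactive stdout.
fconfigure stdout -buffering line
```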
The example is similar to the lesson 40 example with a client and server socket in the same script. It shows a server channel being configured to be non-blocking, and using the default buffering style - data is not made available to the script until a newline is present, or the buffer has filled.
When the first write:
puts -nonewline $sock "A Test Line"

is done, the fileevent triggers the read, but the gets can't read characters because there is no newline. The gets returns -1, and fblocked returns 1. When a newline is finally sent, the data in the input buffer becomes available; gets then returns 18, and fblocked returns 0.


proc serverOpen {channel addr port} {
    puts "channel: $channel - from Address: $addr Port: $port"
    puts "The default state for blocking is: [fconfigure $channel -blocking]"
    puts "The default buffer size is: [fconfigure $channel -buffersize]"

    # Set this channel to be non-blocking.
    fconfigure $channel -blocking 0
    set bl [fconfigure $channel -blocking]
    puts "After fconfigure the state for blocking is: $bl"

    # Change the buffer size to be smaller
    fconfigure $channel -buffersize 12
    puts "After Fconfigure buffer size is: [fconfigure $channel -buffersize]\n"

    # When input is available, read it.
    fileevent $channel readable "readLine Server $channel"
}

proc readLine {who channel} {
    global didRead
    global blocked

    puts "There is input for $who on $channel"
    set len [gets $channel line]
    set blocked [fblocked $channel]
    puts "Characters Read: $len Fblocked: $blocked"

    if {$len < 0} {
        if {$blocked} {
            puts "Input is blocked"
        } else {
            puts "The socket was closed - closing my end"
            close $channel;
        }
    } else {
        puts "Read $len characters: $line"
        puts $channel "This is a return"
        flush $channel;
    }
    incr didRead;
}

set server [socket -server serverOpen 33000]

after 120 update;   # This kicks MS-Windows machines for this application

set sock [socket localhost 33000]
set bl [fconfigure $sock -blocking]
set bu [fconfigure $sock -buffersize]
puts "Original setting for sock: Sock blocking: $bl buffersize: $bu"

fconfigure $sock -blocking No
fconfigure $sock -buffersize 8;
set bl [fconfigure $sock -blocking]
set bu [fconfigure $sock -buffersize]
puts "Modified setting for sock: Sock blocking: $bl buffersize: $bu\n"

# Send a line to the server -- NOTE flush
set didRead 0
puts -nonewline $sock "A Test Line"
flush $sock;

# Loop until two reads have been done.
while {$didRead < 2} {
    # Wait for didRead to be set
    vwait didRead
    if {$blocked} {
        puts $sock "Newline"
        flush $sock
        puts "SEND NEWLINE"
    }
}

set len [gets $sock line]
puts "Return line: $len -- $line"

close $sock
vwait didRead
catch {close $server}

TCL - Time and Date - clock

The clock command provides access to the time and date functions in Tcl. Depending on the subcommands invoked, it can acquire the current time, or convert between different representations of time and date.
The clock command is a platform independent method of getting the display functionality of the unix date command, and provides access to the values returned by a unix gettime() call.
clock seconds
The clock seconds command returns the time in seconds since the epoch. The date of the epoch varies for different operating systems, thus this value is useful for comparison purposes, or as an input to the clock format command.
clock format clockValue ?-gmt boolean? ?-format string?
The format subcommand formats a clockValue (as returned by clock seconds) into a human readable string.

The -gmt switch takes a boolean as the second argument. If the boolean is 1 or True, then the time will be formatted as Greenwich Mean Time, otherwise, it will be formatted as local time.
The -format option controls the format of the returned string. The string argument is similar to the format string of the format command (as discussed in lessons 19, 33 and 34). In addition, there are several more %* descriptors that can be used to describe the output.
These include:
  • %a . . . . Abbreviated weekday name (Mon, Tue, etc.)
  • %A . . . . Full weekday name (Monday, Tuesday, etc.)
  • %b . . . . Abbreviated month name (Jan, Feb, etc.)
  • %B . . . . Full month name (January, February, etc.)
  • %d. . . . . Day of month
  • %j . . . . . Julian day of year
  • %m . . . . Month number (01-12)
  • %y. . . . . Year in century
  • %Y . . . . Year with 4 digits
  • %H . . . . Hour (00-23)
  • %I . . . . . Hour (01-12)
  • %M . . . . Minutes (00-59)
  • %S . . . . . Seconds (00-59)
  • %p . . . . . PM or AM
  • %D . . . . Date as %m/%d/%y
  • %r. . . . . Time as %I:%M:%S %p
  • %R . . . . Time as %H:%M
  • %T . . . . Time as %H:%M:%S
  • %Z . . . . Time Zone Name
clock scan dateString
The scan subcommand converts a human readable string to a system clock value, as would be returned by clock seconds.

The dateString argument contains strings in these forms:
A time of day in one of the formats shown below. Meridian may be AM, or PM, or a capitalization variant. If it is not specified, then the hour (hh) is interpreted as a 24 hour clock. Zone may be a three letter description of a time zone, EST, PDT, etc.
  • hh:mm:ss ?meridian? ?zone?
  • hhmm ?meridian? ?zone?
A date in one of the formats shown below.
  • mm/dd/yy
  • mm/dd
  • monthname dd, yy
  • monthname dd
  • dd monthname yy
  • dd monthname
  • day, dd monthname yy
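A short sketch of clock scan handling two of the forms listed above, round-tripped through clock format:

```tcl
# Time of day, 24 hour clock (no meridian given).
set t1 [clock scan "14:30:00"]

# "monthname dd, yy" form.
set t2 [clock scan "Jan 12, 1999"]

# Both values are plain seconds counts, so clock format can display them.
puts [clock format $t1 -format %T]
puts [clock format $t2 -format %D]
```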


set systemTime [clock seconds]

puts "The time is: [clock format $systemTime -format %H:%M:%S]"
puts "The date is: [clock format $systemTime -format %D]"

puts [clock format $systemTime -format {Today is: %A, the %d of %B, %Y}]

puts "\n the default format for the time is: [clock format $systemTime]\n"

set halBirthBook "Jan 12, 1997"
set halBirthMovie "Jan 12, 1992"

set bookSeconds [clock scan $halBirthBook]
set movieSeconds [clock scan $halBirthMovie]

puts "The book and movie versions of '2001, A Space Odyssey' had a"
puts "discrepancy of [expr {$bookSeconds - $movieSeconds}] seconds in how"
puts "soon we would have sentient computers like the HAL 9000"

TCL - Channel I/O: socket, fileevent, vwait

Tcl I/O is based on the concept of channels. A channel is conceptually similar to a FILE * in C, or a stream in shell programming. The difference is that a channel may be either a stream device like a file, or a connection oriented construct like a socket.
A stream based channel is created with the open command, as discussed in lesson 26. A socket based channel is created with a socket command. A socket can be opened as either as a client, or as a server.
If a socket channel is opened as a server, then the tcl program will 'listen' on that channel for another task to attempt to connect with it. When this happens, a new channel is created for that link (server-> new client), and the tcl program continues to listen for connections on the original port number. In this way, a single Tcl server could be talking to several clients simultaneously.
When a channel exists, a handler can be defined that will be invoked when the channel is available for reading or writing. This handler is defined with the fileevent command. When a tcl procedure does a gets or puts to a blocking device, and the device isn't ready for I/O, the program will block until the device is ready. This may be a long while if the other end of the I/O channel has gone off line. Using the fileevent command, the program only accesses an I/O channel when it is ready to move data.
Finally, there is a command to wait until an event happens. The vwait command will wait until a variable is set. This can be used to create a semaphore style functionality for the interaction between client and server, and let a controlling procedure know that an event has occurred.
Look at the example, and you'll see the socket command being used as both client and server, and the fileevent and vwait commands being used to control the I/O between the client and server.
Note in particular the flush commands being used. Just as a channel that is opened as a pipe to a command doesn't send data until either a flush is invoked, or a buffer is filled, the socket based channels don't automatically send data.
socket -server command ?options? port
The socket command with the -server flag starts a server socket listening on port port. When a connection occurs on port, command is invoked with the arguments:
  • channel - The channel for the new client
  • address - The IP Address of this client
  • port - The port that is assigned to this client
socket ?options? host port
The socket command without the -server option opens a client connection to the system with IP Address host and port address port. The IP Address may be given as a numeric string, or as a fully qualified domain address.
To connect to the local host, use the address 127.0.0.1 (the loopback address).
fileevent channelID readable ?script?
fileevent channelID writable ?script?
The fileevent command defines a handler to be invoked when a condition occurs. The conditions are readable, which invokes script when data is ready to be read on channelID, and writable, which invokes script when channelID is ready to receive data. Note that end-of-file must be checked for by the script.
vwait varName
The vwait command pauses the execution of a script until some background action sets the value of varName. A background action can be a proc invoked by a fileevent, or a socket connection, or an event from a tk widget.
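The semaphore style described above can be sketched with an after timer standing in for the background action (in the real examples it is a fileevent handler that writes the variable):

```tcl
# The "background action" here is an event-loop timer that fires
# after one second and writes the variable vwait is watching.
set done 0
after 1000 {set done 1}

puts "waiting..."
vwait done          ;# processes events until done is written
puts "the timer fired"
```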


proc serverOpen {channel addr port} {
    global connected
    set connected 1
    fileevent $channel readable "readLine Server $channel"
    puts "OPENED"
}

proc readLine {who channel} {
    global didRead
    if { [gets $channel line] < 0} {
        fileevent $channel readable {}
        after idle "close $channel;set out 1"
    } else {
        puts "READ LINE: $line"
        puts $channel "This is a return"
        flush $channel;
        set didRead 1
    }
}

set connected 0

# catch {socket -server serverOpen 33000}
set server [socket -server serverOpen 33000]
after 100 update

set sock [socket -async localhost 33000]
vwait connected
puts $sock "A Test Line"
flush $sock
vwait didRead
set len [gets $sock line]
puts "Return line: $len -- $line"
catch {close $sock}
vwait out
close $server

TCL - Timing scripts

The simplest method of making a script run faster is to buy a faster processor. Unfortunately, this isn't always an option. You may need to optimize your script to run faster. This is difficult if you can't measure the time it takes to run the portion of the script that you are trying to optimize.
The time command is the solution to this problem. time will measure the length of time that it takes to execute a script. You can then modify the script, rerun time and see how much you improved it.
After you've run the example, play with the size of the loop counters in timetst1 and timetst2. If you make the inner loop counter 5 or less, it may take longer to execute timetst2 than timetst1. This is because it takes time to calculate and assign the variable k, and if the inner loop is too small, the gain from not doing the multiply inside the loop is lost to the time spent on the calculation outside the loop.
time script ?count?
Returns a string reporting the number of microseconds it took to execute script. If count is specified, it will run the script count times and average the result. The time is elapsed time, not CPU time.
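A small sketch of the return value, which is a string of the form "NN microseconds per iteration":

```tcl
# Time a cheap expression 1000 times and show the averaged report.
set result [time {expr {sqrt(2.0)}} 1000]
puts "sqrt: $result"

# The numeric part can be pulled out with lindex for comparisons.
set usec [lindex $result 0]
puts "microseconds per iteration: $usec"
```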


proc timetst1 {lst} {
    set x [lsearch $lst "5000"]
    return $x
}

proc timetst2 {array} {
    upvar $array a
    return $a(5000);
}

# Make a long list and a large array.
for {set i 0} {$i < 5001} {incr i} {
    set array($i) $i
    lappend list $i
}

puts "Time for list search: [ time {timetst1 $list} 10]"
puts "Time for array index: [ time {timetst2 array} 10]"

TCL - Command line arguments and environment strings

Scripts are much more useful if they can be called with different values in the command line.
For instance, a script that extracts a particular value from a file could be written so that it prompts for a file name, reads the file name, and then extracts the data. Or, it could be written to loop through as many files as are in the command line, and extract the data from each file, and print the file name and data.
The second method of writing the program can easily be used from other scripts. This makes it more useful.
The number of command line arguments to a Tcl script is passed as the global variable argc. The name of a Tcl script is passed to the script as the global variable argv0, and the rest of the command line arguments are passed as a list in argv. The name of the executable that runs the script, such as tclsh, is given by the command info nameofexecutable.
Another method of passing information to a script is with environment variables. For instance, suppose you are writing a program in which a user provides some sort of comment to go into a record. It would be friendly to allow the user to edit their comments in their favorite editor. If the user has defined an EDITOR environment variable, then you can invoke that editor for them to use.
Environment variables are available to Tcl scripts in a global associative array, env. The index into env is the name of the environment variable. The command puts "$env(PATH)" would print the contents of the PATH environment variable.
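The EDITOR idea above can be sketched as follows (falling back to vi is just an assumption for illustration; pick whatever default suits your application):

```tcl
# Use the user's preferred editor when the EDITOR environment
# variable is set, otherwise fall back to a default.
if {[info exists env(EDITOR)]} {
    set editor $env(EDITOR)
} else {
    set editor vi    ;# assumed default, not from the original lesson
}
puts "would edit comments with: $editor"
```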


puts "There are $argc arguments to this script"
puts "The name of this script is $argv0"
if {$argc > 0} {puts "The other arguments are: $argv" }

puts "You have these environment variables set:"
foreach index [array names env] {
    puts "$index: $env($index)"
}

TCL - More Debugging - trace

When you are debugging Tcl code, sometimes it's useful to be able to trace either the execution of the code, or simply inspect the state of a variable when various things happen to it. The trace command provides these facilities. It is a very powerful command that can be used in many interesting ways. It also risks being abused, and can lead to very difficult to understand code if it is used improperly (for instance, variables seemingly changing magically), so use it with care.
There are three principal operations that may be performed with the trace command:
  • add, which has the general form: trace add type name opList command
  • info, which has the general form: trace info type name
  • remove, which has the general form: trace remove type name opList command
These add traces, retrieve information about traces, and remove traces, respectively. Traces can be added to three kinds of "things":
  • variable - Traces added to variables are called when some event occurs to the variable, such as being written to or read.
  • command - Traces added to commands are executed whenever the named command is renamed or deleted.
  • execution - Traces on "execution" are called whenever the named command is run.
Traces on variables are invoked on four separate conditions - when a variable is accessed or modified via the array command, when the variable is read or written, or when it is unset. For instance, to set a trace on a variable so that when it's written to, the value doesn't change, you could do this:
proc vartrace {oldval varname element op} {
    upvar $varname localvar
    set localvar $oldval
}

set tracedvar 1
trace add variable tracedvar write [list vartrace $tracedvar]
set tracedvar 2
puts "tracedvar is $tracedvar"
In the above example, we create a proc that takes four arguments. We supply the first, the old value of the variable, because write traces are triggered after the variable's value has already been changed, so we need to preserve the original value ourselves. The other three arguments are the variable's name, the element name if the variable is an array (which it isn't in our example), and the operation to trace - in this case, write. When the trace is called, we simply set the variable's value back to its old value. We could also do something like generate an error, thus warning people that this variable shouldn't be written to. In fact, this would probably be better. If someone else is attempting to understand your program, they could become quite confused when they find that a simple set command no longer functions!
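The error-raising alternative mentioned above can be sketched like this: a write trace that rejects the assignment loudly instead of silently undoing it.

```tcl
# A write trace that raises an error whenever the variable is written.
proc readonly {varname element op} {
    error "$varname is read-only"
}

set locked 1
trace add variable locked write readonly

# The set command now fails, and the caller sees an ordinary Tcl error.
if {[catch {set locked 2} msg]} {
    puts "write rejected: $msg"
}
```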
The command and execution traces are intended for expert users - perhaps those writing debuggers for Tcl in Tcl itself - and are therefore not covered in this tutorial, see the trace man page for further information.


proc traceproc {variableName arrayElement operation} {
    set op(write) Write
    set op(unset) Unset
    set op(read) Read

    set level [info level]
    incr level -1
    if {$level > 0} {
        set procid [info level $level]
    } else {
        set procid "main"
    }

    if {![string match $arrayElement ""]} {
        puts "TRACE: $op($operation) $variableName($arrayElement) in $procid"
    } else {
        puts "TRACE: $op($operation) $variableName in $procid"
    }
}

proc testProc {input1 input2} {
    upvar $input1 i
    upvar $input2 j
    set i 2
    set k $j
}

trace add variable i1 write traceproc
trace add variable i2 read traceproc
trace add variable i2 write traceproc

set i2 "testvalue"

puts "\ncall testProc"
testProc i1 i2

puts "\nTraces on i1: [trace info variable i1]"
puts "Traces on i2: [trace info variable i2]\n"

trace remove variable i2 read traceproc
puts "Traces on i2 after vdelete: [trace info variable i2]"

puts "\ncall testProc again"
testProc i1 i2

TCL - Debugging and Errors - errorInfo errorCode catch error return

In previous lessons we discussed how the return command could be used to return a value from a proc. In Tcl, a proc may return a value, but it always returns a status.
When a Tcl command or procedure encounters an error during its execution, the global variable errorInfo is set, and an error condition is generated. If you have proc a that called proc b, which called proc c, which called proc d, and d generates an error, the "call stack" will unwind. Since d generates an error, c will not complete execution cleanly, and will have to pass the error up to b, and in turn on to a. Each procedure adds some information about the problem to the report. For instance:
proc a {} {
    b }
proc b {} {
    c }
proc c {} {
    d }
proc d {} {
    some_command }
Produces the following output:
invalid command name "some_command"
    while executing
"some_command"
    (procedure "d" line 2)
    invoked from within
"d"
    (procedure "c" line 2)
    invoked from within
"c"
    (procedure "b" line 2)
    invoked from within
"b"
    (procedure "a" line 2)
    invoked from within
"a"
    (file "errors.tcl" line 16)
This actually occurs when any exception condition occurs, including break and continue. The break and continue commands normally occur within a loop of some sort; the loop command catches the exception and processes it properly, meaning that it either stops executing the loop, or continues on to the next iteration without executing the rest of the loop body.
It is possible to "catch" errors and exceptions with the catch command, which runs some code, and catches any errors that code happens to generate. The programmer can then decide what to do about those errors and act accordingly, instead of having the whole application come to a halt.
For example, if an open call returns an error, the user could be prompted to provide another file name.
A Tcl proc can also generate an error status condition. This can be done by specifying an error return with an option to the return command, or by using the error command. In either case, a message will be placed in errorInfo, and the proc will generate an error.
error message ?info? ?code?
Generates an error condition and forces the Tcl call stack to unwind, with error information being added at each step.
If info or code are provided, the errorInfo and errorCode variables are initialized with these values.
catch script ?varName?
Evaluates and executes script. The return value of catch is the status return of the Tcl interpreter after it executes script. If there are no errors in script, this value is 0. Otherwise it is 1.
If varName is supplied, the value returned by script is placed in varName if the script successfully executes. If not, the error is placed in varName.
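Both outcomes can be sketched with the ?varName? argument:

```tcl
# Success: catch returns 0 and result holds the script's value.
if {[catch {expr {1 + 1}} result]} {
    puts "error: $result"
} else {
    puts "ok, result is $result"
}

# Failure: catch returns 1 and result holds the error message.
if {[catch {expr {1 / 0}} result]} {
    puts "error: $result"
}
```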
return ?-code code? ?-errorinfo info? ?-errorcode errorcode? ?value?
Generates a return exception condition. The possible arguments are:
-code code
The next value specifies the return status. code must be one of:
  • ok - Normal status return
  • error - Proc returns error status
  • return - Normal return
  • break - Proc returns break status
  • continue - Proc returns continue status
These allow you to write procedures that behave like the built-in commands break, error, and continue.
-errorinfo info
info will be the first string in the errorInfo variable.
-errorcode errorcode
The proc will set errorCode to errorcode.
value
The string value will be the value returned by this proc.
errorInfo is a global variable that contains the error information from commands that have failed.
errorCode is a global variable that contains the error code from the command that failed. This is meant to be in a format that is easy to parse with a script, so that Tcl scripts can examine the contents of this variable and decide what to do accordingly.


proc errorproc {x} {
    if {$x > 0} {
        error "Error generated by error" "Info String for error" $x
    }
}

catch errorproc
puts "after bad proc call: ErrorCode: $errorCode"
puts "ERRORINFO:\n$errorInfo\n"

set errorInfo "";

catch {errorproc 0}
puts "after proc call with no error: ErrorCode: $errorCode"
puts "ERRORINFO:\n$errorInfo\n"

catch {errorproc 2}
puts "after error generated in proc: ErrorCode: $errorCode"
puts "ERRORINFO:\n$errorInfo\n"

proc returnErr { x } {
    return -code error -errorinfo "Return Generates This" -errorcode "-999"
}

catch {returnErr 2}
puts "after proc that uses return to generate an error: ErrorCode: $errorCode"
puts "ERRORINFO:\n$errorInfo\n"

proc withError {x} {
    set x $a
}

catch {withError 2}
puts "after proc with an error: ErrorCode: $errorCode"
puts "ERRORINFO:\n$errorInfo\n"

catch {open [file join no_such_directory no_such_file] r}
puts "after an error call to a nonexistent file:"
puts "ErrorCode: $errorCode"
puts "ERRORINFO:\n$errorInfo\n"

TCL - Changing Working Directory - cd, pwd

Tcl also supports commands to change and display the current working directory.
These are:
cd ?dirName?
Changes the current directory to dirName (if dirName is given), or to the $HOME directory if dirName is not given. If dirName is a tilde (~), cd changes the working directory to the user's home directory. If dirName starts with a tilde, then the rest of the characters are treated as a login id, and cd changes the working directory to that user's $HOME.
pwd
Returns the current directory.


set dirs [list TEMPDIR]
puts "[format "%-15s %-20s " "FILE" "DIRECTORY"]"

foreach dir $dirs {
    catch {cd $dir}
    set c_files [glob -nocomplain c*]
    foreach name $c_files {
        puts "[format "%-15s %-20s " $name [pwd]]"
    }
}

TCL - Substitution without evaluation - format, subst

The Tcl interpreter does only one substitution pass during command evaluation. Some situations, such as placing the name of a variable in a variable, require two passes through the substitution phase. In this case, the subst command is useful.
The subst command performs a substitution pass without executing any commands except those required for the substitution to occur, i.e. commands within [] will be executed, and the results placed in the return string.
In the example code, the line
puts "[subst $$c]\n"
shows a variable name being placed in a variable, and evaluated through the indirection.
The format command can also be used to force some levels of substitution to occur.
subst ?-nobackslashes? ?-nocommands? ?-novariables? string
Passes string through the Tcl substitution phase, and returns the original string with the backslash sequences, commands and variables replaced by their equivalents.

If any of the -no... arguments are present, then that set of substitutions will not be done.
NOTE: subst does not honor braces or quotes.
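The -no... switches each suppress one class of substitution, which can be sketched like this:

```tcl
set name World

# Full substitution: variables and commands are both replaced.
puts [subst {Hello $name [string length $name]}]

# Variable substitution suppressed; command substitution still runs.
puts [subst -novariables {Hello $name [string length $name]}]

# Command substitution suppressed; the bracketed text is left alone.
puts [subst -nocommands {Hello $name [string length $name]}]
```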


set a "alpha"
set b a
puts {a and b with no substitution: $a $$b}
puts "a and b with one pass of substitution: $a $$b"
puts "a and b with subst in quotes: [subst "$a $$b"]\n"
puts "a and b with subst in braces: [subst {$a $$b}]"

puts "format with no subst [format {$%s} $b]"
puts "format with subst: [subst [format {$%s} $b]]"
eval "puts \"eval after format: [format {$%s} $b]\""

set num 0;
set cmd "proc tempFileName {} "
set cmd [format "%s {global num; incr num;" $cmd]
set cmd [format {%s return "/tmp/TMP.%s.$num"} $cmd [pid] ]
set cmd [format "%s }" $cmd ]
eval $cmd

puts "[info body tempFileName]"

set a arrayname
set b index
set c newvalue

eval [format "set %s(%s) %s" $a $b $c]
puts "Index: $b of $a was set to: $arrayname(index)"

TCL - More command construction - format, list

There may be some unexpected results when you try to compose command strings for eval.
For instance
eval puts OK
would print the string OK. However,
eval puts Not OK
will generate an error.
The reason that the second command generates an error is that the eval uses concat to merge its arguments into a command string. This causes the two words Not OK to be treated as two arguments to puts. If there is more than one argument to puts, the first argument must be a file pointer.
Correct ways to write the second command include these:
eval [list puts {Not OK}]
eval [list puts "Not OK"]
set cmd "puts" ; lappend cmd {Not OK}; eval $cmd
As long as you keep track of how the arguments you present to eval will be grouped, you can use many methods of creating the strings for eval, including the string commands and format.
The recommended method of constructing commands for eval is to use the list and lappend commands. These commands become difficult to use, however, if you need to put braces in the command, as was done in the previous lesson.
The example from the previous lesson is re-implemented in the example code using lappend.
The completeness of a command can be checked with info complete. Info complete can also be used in an interactive program to determine whether the line being typed in is a complete command, or the user just entered a newline to format the command better.
info complete string
If string has no unmatched brackets, braces or parentheses, then a value of 1 is returned, else 0 is returned.
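For example, an interactive reader might accumulate input lines until info complete reports a full command. This is a minimal sketch; the loop structure and variable names are illustrative, not from the original lesson:

```tcl
set buffer ""
while {[gets stdin line] >= 0} {
    append buffer $line "\n"
    # Only evaluate once brackets, braces and quotes are balanced
    if {[info complete $buffer]} {
        if {[catch {eval $buffer} result]} {
            puts "ERROR: $result"
        } elseif {$result ne ""} {
            puts $result
        }
        set buffer ""
    }
}
```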


set cmd "OK"
eval puts $cmd

set cmd "puts" ; lappend cmd {Also OK}; eval $cmd

# The next two lines would generate the error discussed above:
# set cmd "NOT OK"
# eval puts $cmd

eval [format {%s "%s"} puts "Even This Works"]

set cmd "And even this can be made to work"
eval [format {%s "%s"} puts $cmd]

set num 0;
set cmd {proc tempFileName }
lappend cmd ""
lappend cmd "global num; incr num; return \"/tmp/TMP.[pid].\$num\""
eval $cmd
puts "\nThis is the body of the proc definition:"
puts "[info body tempFileName]\n"

set cmd {puts "This is Cool!}
if {[info complete $cmd]} {
    eval $cmd
} else {
    puts "INCOMPLETE COMMAND: $cmd"
}

TCL - Creating Commands - eval

One difference between Tcl and most other languages is that Tcl allows an executing program to create new commands and execute them while running.
A Tcl command is defined as a list of strings in which the first string is a command or proc. Any string or list which meets this criterion can be evaluated and executed.
The eval command will evaluate a list of strings as though they were commands typed at the % prompt or sourced from a file. The eval command normally returns the final value of the commands being evaluated. If the commands being evaluated throw an error (for example, if there is a syntax error in one of the strings), then eval will throw an error.
Note that either concat or list may be used to create the command string, but that these two commands will create slightly different command strings.
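The difference can be seen by counting the words in the resulting command strings (a small sketch to illustrate the point):

```tcl
set msg "Not OK"

# concat flattens its arguments, so $msg contributes two words:
set cmd1 [concat puts $msg]
puts [llength $cmd1]   ;# prints: 3

# list keeps each argument as a single word:
set cmd2 [list puts $msg]
puts [llength $cmd2]   ;# prints: 2

eval $cmd2             ;# prints: Not OK
# eval $cmd1 would error: "Not" is treated as a channel name
```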
eval arg1 ?arg2? ... ?argn?
Evaluates arg1 - argn as one or more Tcl commands. The args are concatenated into a string, and passed to Tcl_Eval to evaluate and execute.

Eval returns the result (or error code) of that evaluation.


set cmd {puts "Evaluating a puts"}
puts "CMD IS: $cmd"
eval $cmd

if {[string match [info procs newProcA] ""]} {
    puts "\nDefining newProcA for this invocation"
    set num 0;
    set cmd "proc newProcA "
    set cmd [concat $cmd "{} {\n"]
    set cmd [concat $cmd "global num;\n"]
    set cmd [concat $cmd "incr num;\n"]
    set cmd [concat $cmd " return \"/tmp/TMP.[pid].\$num\";\n"]
    set cmd [concat $cmd "}"]
    eval $cmd
}
puts "\nThe body of newProcA is: \n[info body newProcA]\n"
puts "newProcA returns: [newProcA]"
puts "newProcA returns: [newProcA]"

#
# Define a proc using lists
#
if {[string match [info procs newProcB] ""]} {
    puts "\nDefining newProcB for this invocation"
    set cmd "proc newProcB "
    lappend cmd {}
    lappend cmd {global num; incr num; return $num;}
    eval $cmd
}
puts "\nThe body of newProcB is: \n[info body newProcB]\n"
puts "newProcB returns: [newProcB]"

TCL - Building reusable libraries - packages and namespaces

The previous lesson showed how the source command can be used to separate a program into multiple files, each responsible for a different area of functionality. This is a simple and useful technique for achieving modularity. However, there are a number of drawbacks to using the source command directly. Tcl provides a more powerful mechanism for handling reusable units of code called packages. A package is simply a bundle of files implementing some functionality, along with a name that identifies the package, and a version number that allows multiple versions of the same package to be present. A package can be a collection of Tcl scripts, or a binary library, or a combination of both. Binary libraries are not discussed in this tutorial.

Using packages

The package command provides the ability to use a package, compare package versions, and register your own packages with an interpreter. A package is loaded by using the package require command and providing the package name and, optionally, a version number. The first time a script requires a package, Tcl builds up a database of available packages and versions. It does this by searching for package index files in all of the directories listed in the tcl_pkgPath and auto_path global variables, as well as any subdirectories of those directories. Each package provides a file called pkgIndex.tcl that tells Tcl the names and versions of any packages in that directory, and how to load them if they are needed.
It is good style to start every script you create with a set of package require statements to load any packages required. This serves two purposes: making sure that any missing requirements are identified as soon as possible; and, clearly documenting the dependencies that your code has. Tcl and Tk are both made available as packages and it is a good idea to explicitly require them in your scripts even if they are already loaded as this makes your scripts more portable and documents the version requirements of your script.
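For example, a script's preamble might look like this (the version numbers are illustrative):

```tcl
# Declare all dependencies up front, with the versions the script needs.
package require Tcl 8.5
package require Tk 8.5
package require http 2.7
```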

Creating a package

There are three steps involved in creating a package:
  • Adding a package provide statement to your script.
  • Creating a pkgIndex.tcl file.
  • Installing the package where it can be found by Tcl.
The first step is to add a package provide statement to your script. It is good style to place this statement at the top of your script. The package provide command tells Tcl the name of your package and the version being provided.
The next step is to create a pkgIndex.tcl file. This file tells Tcl how to load your package. In essence the index file is simply a Tcl file which is loaded into the interpreter when Tcl searches for packages. It should use the package ifneeded command to register a script which will load the package when it is required. The pkgIndex.tcl file is evaluated globally in the interpreter when Tcl first searches for any package. For this reason it is very bad style for an index script to do anything other than tell Tcl how to load a package; index scripts should not define procs, require packages, or perform any other action which may affect the state of the interpreter.
The simplest way to create a pkgIndex.tcl script is to use the pkg_mkIndex command. The pkg_mkIndex command scans files which match a given pattern in a directory looking for package provide commands. From this information it generates an appropriate pkgIndex.tcl file in the directory.
Once a package index has been created, the next step is to move the package to somewhere that Tcl can find it. The tcl_pkgPath and auto_path global variables contain a list of directories that Tcl searches for packages. The package index and all the files that implement the package should be installed into a subdirectory of one of these directories. Alternatively, the auto_path variable can be extended at run-time to tell Tcl of new places to look for packages.
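Putting the three steps together, a tiny package might look like this. The mymath name, file layout, and procedure are invented for illustration; the setup commands are shown commented out because they are run once, not on every load:

```tcl
# --- mymath.tcl ---------------------------------------------------
# Step 1: declare the package name and version.
package provide mymath 1.0

namespace eval ::mymath {
    namespace export double
}

proc ::mymath::double {x} {
    return [expr {$x * 2}]
}

# --- one-off setup, e.g. in an interactive tclsh ------------------
# Step 2: generate pkgIndex.tcl from the .tcl files in this directory.
# pkg_mkIndex . *.tcl
#
# Step 3: make the directory findable, then load the package.
# lappend auto_path [file normalize .]
# package require mymath 1.0
```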
package require ?-exact? name ?version?
Loads the package identified by name. If the -exact switch is given along with a version number, then only that exact package version will be accepted. If a version number is given without the -exact switch, then any version equal to or greater than that version (but with the same major version number) will be accepted. If no version is specified, then any version will be loaded. If a matching package can be found, it is loaded and the command returns the actual version number; otherwise it generates an error.
package provide name ?version?
If a version is given, this command tells Tcl that this version of the package indicated by name is loaded. If a different version of the same package has already been loaded, then an error is generated. If the version argument is omitted, then the command returns the version number that is currently loaded, or the empty string if the package has not been loaded.
pkg_mkIndex ?-direct? ?-lazy? ?-load pkgPat? ?-verbose? dir ?pattern pattern ...?
Creates a pkgIndex.tcl file for a package or set of packages. The command works by loading the files matching the patterns in the directory dir, and seeing what new packages and commands appear. The command is able to handle both Tcl script files and binary libraries (not discussed here).


One problem that can occur when using packages, and particularly when using code written by others, is that of name collision. This happens when two pieces of code try to define a procedure or variable with the same name. In Tcl, when this occurs the old procedure or variable is simply overwritten. This is sometimes a useful feature, but more often it is the cause of bugs if the two definitions are not compatible.

To solve this problem, Tcl provides a namespace command to allow commands and variables to be partitioned into separate areas, called namespaces. Each namespace can contain commands and variables which are local to that namespace and cannot be overwritten by commands or variables in other namespaces. When a command in a namespace is invoked, it can see all the other commands and variables in its namespace, as well as those in the global namespace. Namespaces can also contain other namespaces, allowing a hierarchy of namespaces to be created in a similar way to a file system hierarchy, or the Tk widget hierarchy. Each namespace itself has a name which is visible in its parent namespace.

Items in a namespace can be accessed by creating a path to the item. This is done by joining the names of the items with ::. For instance, to access the variable bar in the namespace foo, you could use the path foo::bar. This kind of path is called a relative path because Tcl will try to follow the path relative to the current namespace. If that fails, and the path represents a command, then Tcl will also look relative to the global namespace. You can make a path fully-qualified by describing its exact position in the hierarchy from the global namespace, which is named ::. For instance, if our foo namespace was a child of the global namespace, then the fully-qualified name of bar would be ::foo::bar. It is usually a good idea to use fully-qualified names when referring to any item outside of the current namespace to avoid surprises.
A namespace can export some or all of the command names it contains. These commands can then be imported into another namespace. This in effect creates a local command in the new namespace which when invoked calls the original command in the original namespace. This is a useful technique for creating short-cuts to frequently used commands from other namespaces. In general, a namespace should be careful about exporting commands with the same name as any built-in Tcl command or with a commonly used name.
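A short sketch of exporting and importing (the greet namespace and its procedures are invented for illustration):

```tcl
namespace eval ::greet {
    namespace export hello
    proc hello {who} { return "Hello, $who" }
    proc internal {} { return "not exported" }
}

# Import the exported command into the current namespace:
namespace import ::greet::hello
puts [hello Tcl]          ;# prints: Hello, Tcl

# Unexported commands remain reachable only by a qualified path:
puts [::greet::internal]  ;# prints: not exported
```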
Some of the most important commands to use when dealing with namespaces are:
namespace eval path script
This command evaluates the script in the namespace specified by path. If the namespace doesn't exist then it is created. The namespace becomes the current namespace while the script is executing, and any unqualified names will be resolved relative to that namespace. Returns the result of the last command in script.
namespace delete ?namespace namespace ...?
Deletes each namespace specified, along with all variables, commands and child namespaces it contains.
namespace current
Returns the fully qualified path of the current namespace.
namespace export ?-clear? ?pattern pattern ...?
Adds any commands matching one of the patterns to the list of commands exported by the current namespace. If the -clear switch is given then the export list is cleared before adding any new commands. If no arguments are given, returns the currently exported command names. Each pattern is a glob-style pattern such as *[a-z]*, or *foo*.
namespace import ?-force? ?pattern pattern ...?
Imports all commands matching any of the patterns into the current namespace. Each pattern is a glob-style pattern such as foo::*, or foo::bar.

Using namespace with packages

William Duquette has an excellent guide to using namespaces and packages at http://www.wjduquette.com/tcl/namespaces.html. In general, a package should provide a namespace as a child of the global namespace and put all of its commands and variables inside that namespace. A package shouldn't put commands or variables into the global namespace by default. It is also good style to give your package and the namespace it provides the same name, to avoid confusion.


This example creates a package which provides a stack data structure.
# Register the package
package provide tutstack 1.0
package require Tcl 8.5

# Create the namespace
namespace eval ::tutstack {
    # Export commands
    namespace export create destroy push pop peek empty

    # Set up state
    variable stack
    variable id 0
}

# Create a new stack
proc ::tutstack::create {} {
    variable stack
    variable id
    set token "stack[incr id]"
    set stack($token) [list]
    return $token
}

# Destroy a stack
proc ::tutstack::destroy {token} {
    variable stack
    unset stack($token)
}

# Push an element onto a stack
proc ::tutstack::push {token elem} {
    variable stack
    lappend stack($token) $elem
}

# Check if stack is empty
proc ::tutstack::empty {token} {
    variable stack
    set num [llength $stack($token)]
    return [expr {$num == 0}]
}

# See what is on top of the stack without removing it
proc ::tutstack::peek {token} {
    variable stack
    if {[empty $token]} { error "stack empty" }
    return [lindex $stack($token) end]
}

# Remove an element from the top of the stack
proc ::tutstack::pop {token} {
    variable stack
    set ret [peek $token]
    set stack($token) [lrange $stack($token) 0 end-1]
    return $ret
}
And some code which uses it:
package require tutstack 1.0

set stack [tutstack::create]

foreach num {1 2 3 4 5} {
    tutstack::push $stack $num
}

while { ![tutstack::empty $stack] } {
    puts "[tutstack::pop $stack]"
}

tutstack::destroy $stack


A common way of structuring related commands is to group them together into a single command with sub-commands. This type of command is called an ensemble command, and there are many examples in the Tcl standard library. For instance, the string command is an ensemble whose sub-commands are length, index, match, etc. Tcl 8.5 introduced a handy way of converting a namespace into an ensemble with the namespace ensemble command. This command is very flexible, with many options to specify exactly how sub-commands are mapped to commands within the namespace. The most basic usage is very straightforward, however, and simply creates an ensemble command with the same name as the namespace and with all exported procedures registered as sub-commands. To illustrate this, we will convert our stack data structure into an ensemble:
package require tutstack 1.0
package require Tcl 8.5

# Create the ensemble command
namespace eval ::tutstack {
    namespace ensemble create
}

# Now we can use our stack through the ensemble command
set stack [tutstack create]

foreach num {1 2 3 4 5} {
    tutstack push $stack $num
}

while { ![tutstack empty $stack] } {
    puts "[tutstack pop $stack]"
}

tutstack destroy $stack
As well as providing a nicer syntax for accessing functionality in a namespace, ensemble commands also help to clearly distinguish the public interface of a package from the private implementation details, as only exported commands will be registered as sub-commands and the ensemble will enforce this distinction. Readers who are familiar with object-oriented programming (OOP) will realise that the namespace and ensemble mechanisms provide many of the same encapsulation advantages. Indeed, many OO extensions for Tcl build on top of the powerful namespace mechanism.
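As a taste of the options mentioned above, the -map option lets the ensemble's sub-command names differ from the underlying procedure names. This sketch assumes the ::tutstack namespace from the example; the alternative sub-command names are invented for illustration:

```tcl
namespace eval ::tutstack {
    # Map friendlier sub-command names onto the existing procs:
    namespace ensemble create -map {
        new  ::tutstack::create
        add  ::tutstack::push
        take ::tutstack::pop
    }
}

set s [tutstack new]
tutstack add $s 42
puts [tutstack take $s]   ;# prints: 42
```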
