Saturday, December 29, 2012

Unix Shell Script

exec command &      runs the command in the background; the command is visible in ps -ef
. command              runs the command in the foreground as a sibling; if it is invoked by another ksh via . command, it does not show up in ps -ef
command                runs the command as a child; it does not show up in ps -ef

$ ls -l /dev | grep '^b'
$ ls [x-z]4*
y4sale z4
$ ls [!a-y]*
z4 zed
$ history
$ r 10 # repeat command number 10
Another way to repeat recently issued commands is to press Esc, then k repeatedly (vi editing mode).
If you only know the first letter or first few letters of a command, you can type
<esc>\ and the KornShell will complete the remaining letters for you.
$ var1="Brown"
$ print "$var1"
commonly used KornShell reserved variables:
dtci-ndadabin01:speng $ echo $PS1
dtci-ndadabin01:${LOGNAME} $
dtci-ndadabin01:speng $ echo $PS2
>
dtci-ndadabin01:speng $ echo $PPID
10356
PWD Present working directory set by the cd command
RANDOM A random integer from 0 to 32767
If you do export var, then shell scripts can access the value of var.
$ export #list environment
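For example, suppose showvar.ksh is a hypothetical one-line script containing print $var:
$ var=hello
$ ksh showvar.ksh        # prints an empty line; var was not exported
$ export var
$ ksh showvar.ksh        # now prints hello
hello
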
To run a program in the background, place an ampersand (&) at the end of the command line
Only one command at a time can run in the foreground; however, multiple
commands can simultaneously run in the background. The background is a good
place to run time-consuming commands.
If you run a pipeline in the background, both processes show up in ps
The following code example shows how to wait for a background process to complete.
$ wait %1
vi Refresher
J Join two lines
p Paste the contents of the paste buffer
4yy Copy this line and the next three lines into the paste buffer
3dd Delete this line and the next two lines; place the deleted lines into the paste buffer

fork() and exec()
The fork() command creates a new process that runs the same program as the
parent (a new nest with the same egg).
The exec() command replaces the program in the process with a new program.
A KornShell script can be executed in either of the following two ways:
- As an argument to ksh
$ ksh myshellscript
To run a shell script as an argument to ksh, the user invoking the shell script must
have read permission on the shell script file.
- By name
To run a shell script by name, simply type its pathname:
$ myshellscript
To run a shell script by name, the user invoking the shell script must have read
and execute permission on the shell script file.

Positional Parameters
Parameter Assignment
$# Total number of positional parameters
$* All positional parameters
$0 The filename of the KornShell script
$1 The first positional parameter
$2 The second positional parameter
$3 The third positional parameter

${10} The tenth positional parameter

Use the shift statement to slide positional parameters.
For example, the following script uses the positional parameters to analyze the
command line arguments passed by the user.
Example
$ cat shifty
print "$1"
shift
print "$1"
shift
print "$1"
$ shifty HELLO THERE
HELLO
THERE

$ cat -n save.ksh
1 while [[ -a $1 ]]
2 do
3 cp $1 $1.save
4 shift
5 done
$ ksh -x save.ksh abc.dat xyz # run in trace mode
+ [[ -a abc.dat ]]
+ cp abc.dat abc.dat.save
+ shift
+ [[ -a xyz ]]
+ cp xyz xyz.save
+ shift
+ [[ -a ]]

Provide a default value for a parameter to be used if the parameter is not set, as
shown below.
$ cat myscript
print ${1:-abc}
print ${2:-mouse}
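
Running it with one argument supplies the default for the second (a quick check, assuming myscript is in the current directory):
$ ksh myscript dog
dog
mouse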

Escape Sequence    What It Does
" "                Turns off the special significance of all enclosed characters except $ ` " and \
' '                Turns off the special significance of all enclosed characters
\                  Escapes the character that comes immediately after the \


 
The KornShell provides four techniques for testing arguments:
1. [[ ]] — new with KornShell (preferred for string tests)
2. (( )) — new with KornShell (preferred for math tests)
3. test command — Bourne shell and KornShell
4. [ ] — Bourne shell and KornShell
All four techniques test arguments and assign the outcome of that test to the
special shell variable $?. The value of $? will be either:
zero SUCCESS
or
nonzero FAILURE
Using the (( )) Command
Sample usage of the (( )) command is shown below.
$ XX=17
$ (( $XX == 17 ))
$ print $?
0
Using the test Command
Sample usage of the test command is shown below.
$ XX=17
$ test $XX -eq 17
$ print $?
0
Using the [ ] Command
Sample usage of the [] command is shown below.
$ XX=17
$ [ $XX -eq 17 ]
$ print $?
0
Note: Spaces around [[ and ]] are required, as shown below.
$ [["$response" = "Yes" ]]
ksh: [[: not found
 
Integer Comparison Operators
Operator Returns
n1 == n2 Success if integers n1 and n2 are equal
n1 != n2 Success if integers n1 and n2 are not equal
n1 > n2 Success if integer n1 is greater than integer n2
n1 >= n2 Success if integer n1 is greater than or equal to integer n2
n1 < n2 Success if integer n1 is less than integer n2
n1 <= n2 Success if integer n1 is less than or equal to integer n2

String Comparison
Operator Returns
-z s1 Success if length of string s1 is zero
-n s1 Success if length of string s1 is non-zero
s1 = s2 Success if strings s1 and s2 are identical
s1 != s2 Success if strings s1 and s2 are not identical
s1 < s2 Success if string s1 comes before string s2 based on their ASCII values
s1 > s2 Success if string s1 comes after string s2 based on their ASCII values

File Enquiry Operators
Operator Returns
-a file Success if file exists
-r file Success if file exists and is readable
-w file Success if file exists and is writable
-x file Success if file exists and is executable
-f file Success if file exists and is a regular file (as opposed to a directory)
-d file Success if file exists and is a directory
-s file Success if file exists and has a size greater than zero
file1 -nt file2 Success if file1 is newer than file2
file1 -ot file2 Success if file1 is older than file2
file1 -ef file2 Success if file1 is another name for file2

The following logic operators are shown in decreasing order of precedence.
Operator What It Does
! Unary negation operator
&& Logical AND operator
|| Logical OR operator

if [[ -f $1 ]]
then
print "$1 is a plain file"
elif [[ -d $1 ]]
then
print "$1 is a directory file"
else
print "$1 is neither a plain file nor a directory"
fi

while [[ -n $1 ]] # loop until there are no more arguments
do
if [[ -r $1 ]]
then
cp $1 $NEWDIR
fi
shift # get next command line argument
done

while true
do
lines executed in an infinite loop
done
true always returns a 0 (success) value.


The for loop of KornShell is somewhat different than the for loop of most other
languages.
In most other languages, the for loop initializes a variable to a numerical value
and then increments or decrements that value with each iteration of the loop. In the
KornShell, the for loop iterates through a collection of strings or filenames.


# The following for loop creates fa.new, fb.new, fc.new and fd.new:
for FILE in fa fb fc fd
do
cp ${FILE} ${FILE}.new
done

The following is a special case of for that uses positional parameters.
# The following for loop iterates one time for each command
# line argument:
for FILE
do
cp ${FILE} ${FILE}.new
done
Note:
for FILE
is equivalent to:
for FILE in $*

Invoking the shell with the -x option provides an execute trace. Invoking the shell
with the -v option provides a verbose trace.

Verbose trace prints shell commands as they are read, and is useful for syntax checking.
An example of using execute trace inside a shell script is shown below.
Example
$ cat save
while [[ -a $1 ]]
do
set -o xtrace # turn on execute trace
cp $1 $1.save
set +o xtrace # turn off execute trace
shift
done
$ save abc.dat xyz
+ cp abc.dat abc.dat.save
+ cp xyz xyz.save

Assign the standard output of command to var, as shown below.
var=$(command)
$ foo=$(ls s*) # assign the output of ls s* to variable foo
$ print $foo
s.cnv.c start superx

Instead of running a KornShell script as a child, you can run it as a sibling.
Although the second script is termed “sibling,” the original script remains the
parent. By running as a sibling, the script can change the parent’s environment.
To run as a sibling, precede the KornShell script with a . (dot) and then a space.

cmd1 && cmd2 execute cmd2 only if cmd1 succeeds.
$ who | grep -q sam && print "sam is logged on"
cmd1 || cmd2 execute cmd2 only if cmd1 fails.
$ who | grep -q sam || print "sam NOT logged on"

Single quotes are stricter: they prevent any variable expansion. Double quotes prevent wildcard expansion but allow variable expansion. Another way to prevent expansion is the escape character, the backslash: \
if [ $# -lt 3 ] ; then    # tests whether fewer than 3 command line arguments were given (the special variable $# holds the number of arguments)

The shell also has a real debug mode. If the script "strangescript" fails, it can be debugged with:
sh -x strangescript
This command executes the script and displays the values of all variables as it runs.
The shell also has a mode that only checks syntax without executing the script:
sh -n your_script
This command reports any syntax errors.


cut -f1,7    copies only the first and seventh fields


Programs written for the Bourne
shell can run under the Korn shell without modification.

Get ksh version
dtci-ndadabin01:speng $ what /bin/ksh
/bin/ksh:
   Version M-11/16/88i
   SunOS 5.10 Generic 118872-04 Aug 2006

$ for i in $(ls); do cp $i $i.bak; done
If a command is terminated with a \ character, it is continued on the next line.

If two commands are separated with &&, the second command is
only executed if the first command returns a zero exit status. Here,
the echo command is executed, because ls ran successfully:
$ ls temp && echo "File temp exists"
If two commands are separated with ||, the second command is only
executed if the first command returns a non-zero exit status.

You can implement a simple if command by using the &&
and || operators together like this:
command1 && command2 || command3
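For example (on a system where /tmp exists):
$ [[ -d /tmp ]] && print "/tmp found" || print "/tmp missing"
/tmp found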

echo "This is file temp:";cat temp|nl
{ echo "This is file temp:";cat temp; }|nl  #By using {}'s, the output from both commands is line numbered:
There must be whitespace after the opening {, or else you get a syntax
error. One more restriction: commands within the {}'s must be
terminated with a semi-colon when given on one line.

By using the > symbol, the standard output of a command can be redirected to a file. If the file doesn't exist, it is created. Otherwise, the contents are
usually overwritten.
Standard output can be appended to a file by using the >> redirect operator.
Standard output is closed with the >&- redirect operator:
$ echo "This is going nowhere" >&-
$
The >| operator is used to force overwriting of a file, even if the
noclobber option is enabled. Here, file.out can now be overwritten,
even though the noclobber option is set:
$ ls >| file.out

The
Korn shell automatically assigns file descriptor 0 to standard input for reading, file descriptor 1 to standard output for writing, and file
descriptor 2 to standard error for reading and writing.
$ ls tmp t.out 2>ls.out
There can be no space between the 2 and > symbol, otherwise the 2 is interpreted as an argument to the command.
$ ls tmp t.out 2>ls.out 1>&2
In this command, both standard error and standard output are sent to ls.out by specifying multiple redirections on the same command line.
Redirections are processed left to right: first, 2>ls.out sends standard error to ls.out; then 1>&2 makes standard output a duplicate
of file descriptor 2, which now points to ls.out, so both streams end up in the file. Another example, using command grouping:
$ { echo "This is going to stdout" >&1 ; \
> echo "This is going to stderr" >&2 ; } >out 2>&1
$ cat out
This is going to stdout
This is going to stderr

Here documents is a term used for redirecting multiple lines of standard input to a command.
cat >> profile <<END
> export 1
> export 2
> END

The ! character can be used with [ ] to reverse the match. In other
words, [!a] matches any character, except a.
$ ls [!d]*

This pattern matches anti, antic, antigen, and antique:
$ match anti*(c|gen|que)
+(pattern)   This format matches one or more occurrences of pattern.
@(pattern)  This format matches exactly one occurrence of pattern.
!(pattern)   This format matches anything except pattern.

File name substitution can be disabled by setting the noglob option using the set command:
$ set -o noglob
or
$ set -f

The format for command substitution is:
$(command)
$ echo The date is $(date)
The date is Fri Jul 27 10:41:21 PST 1996

For compatibility with the Bourne shell, the following format for command substitution can also be used:
`command `

$((arithmetic-expression))
$ echo $((8192*16384%23))
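Here * and % have equal precedence and are evaluated left to right, so the command prints (8192*16384) % 23 = 134217728 % 23 = 9.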

The Korn shell supports four data types: string, integer, float, and
array. If a data type is not explicitly defined, the Korn shell will
assume that the variable is a string.
By default, all variables are global in scope. However, it is possible to
declare a local variable within a function.

To assign a value and/or
attribute to a Korn shell variable, use the following format with the
typeset command:
typeset -attribute variable=value
or
typeset -attribute variable
typeset +attribute variable        removes attribute from variable
typeset -r ...            Once the readonly attribute is set, a variable cannot be assigned another value.
typeset -i NUM=1          The integer attribute (-i) is used to explicitly declare integer variables.
typeset -E5 X=123.456     gives five significant figures; $ print $X displays 123.46. The float command can also be used to declare a float variable, but it does not allow specifying the precision.

Multiple attributes can also be assigned to variables. This command sets the integer and autoexport attributes for TMOUT:
$ typeset -ix TMOUT=3000
list all the integer type variables and their values:
$ typeset -i
Variables can be assigned command output using this format:
variable=$(command)
or
variable=`command`

? exit status of the last command
$ process id of the current Korn shell
- current options in effect
! process id of the last background command or co-process
ERRNO error number returned by most recently failed system call (system dependent)
PPID process id of the parent shell

${#variable}   This is expanded to the length of variable.
After X is unset, the alternate value cde is used:
$ print ${X:-cde}
$ X=A
$ X[1]=B
The original assignment to variable X is not lost. The first array element (X[0]) is still assigned A.
$ set -A DAY Mon Tue Wed Thu Fri Sat Sun
print ${DAY[3]} ${DAY[5]}
print ${DAY[*]}  or print ${DAY[@]}
print ${#DAY[*]}
To get values for a subset of an array, use this format:
${variable[*]:start_subscript:num_elements}
or
${variable[@]:start_subscript}
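
For example, with the DAY array defined above:
$ print ${DAY[*]:1:3}
Tue Wed Thu
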
$ typeset -u DAY
print ${DAY[*]}

Double quotes are like single quotes, except that they do not remove
the meaning of the special characters $, `, and \.
fc -l -5

Jobs running in the foreground are suspended by typing Ctrl-Z (Control-Z),
and backgrounded jobs are brought back into the foreground with the fg command.  $ fg %3
The status and other information about all jobs is displayed using the jobs command.
The + indicates the current job, and - indicates the previous job:
$ jobs
[3] + Stopped split -5000 hugefile
[2] - Running find / -name core -print &
[1] Running sleep 25 &
The jobs -l command shows the same information, along with the
process ids, while jobs -p only gives you the process ids.
$ sleep 100 &
[1] 254
$ kill %1
It could also be given as kill 254.
You can make the Korn shell wait for some or all background jobs to complete with the wait command. If no argument is given, the Korn
shell waits for all background jobs to complete.
Jobs being executed in the background are prevented from generating output by setting stty tostop. The only way to see the output is to bring the job back into the foreground.
The nohup command can also be used to direct output from
background jobs. It causes standard output and standard error to be
automatically sent to nohup.out, or whatever file you give it.
The nohup command will keep jobs running, even if you log out.
The Korn shell displays a warning message if you try to exit from the shell while jobs are stopped.

Integer arithmetic can be done with the let command and arithmetic expressions.
$ let "X=1 + 1"
The ((...)) command is equivalent to the let command, except that all characters between the (( and )) are treated as quoted arithmetic
expressions.
variables can be explicitly declared integer type by using the typeset –i command.
$ typeset -i DAYS MONTHS=12

The [[...]] command is used to evaluate conditional expressions with file attributes, strings, integers, and more.
If you are familiar with the test and [...] commands, then you'll recognize that [[...]] is just a new and improved version of the same
commands. It basically functions the same way, except that a number of new operators are available.
$ X=abc
$ [[ $X = abc ]] && print "X is set to abc"
X is set to abc

This expression evaluates to true if there are less than or equal to three positional parameters set:
[[ $# -le 3 ]] && print "3 or less args given"

The ! operator negates the result of any [[...]] expression when used like this:
[[ ! expression ]]
For example, to check if X is not equal to abc:
$ X=xyz
$ [[ ! $X = abc ]] && print "$X not equals abc"
xyz not equals abc

$ print "This is output again" | read LINE
$ print $LINE
This is output again

$ cat kcat
IFS=:                   # treat colons as field separators
exec 0<$1               # open the file named by $1 as standard input
while read LINE
do
print $LINE             # $LINE is unquoted, so the colon separators are replaced with spaces on output
done

The . command reads in a complete file, then executes the commands
in it as if they were typed in at the prompt. This is done in the current
shell, so any variable, alias, or function settings stay in effect. It is
typically used to read in and execute a profile, environment, alias, or
functions file.

Co-processes are commands that are terminated with a |& character.
They are executed in the background, but have their standard input
and output attached to the current shell. The print -p command is
used to write to the standard input of a co-process, while read -p is
used to read from the standard output of a co-process.

n<&p redirect input from co-process to file
descriptor n. If n is not specified, use
standard input.
n>&p redirect output of co-process to file
descriptor n. If n is not specified, use
standard output.
print -p    write to the standard input of a co-process
read -p     read from the standard output of a co-process
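
A minimal sketch, assuming upcase.ksh is a hypothetical co-process script that reads lines and prints them back in upper case:
$ ./upcase.ksh |&        # start the co-process in the background
$ print -p "hello"       # write to the co-process's standard input
$ read -p REPLY          # read the co-process's reply
$ print $REPLY
HELLO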

The times command displays the amount of time used by the current Korn
shell and its child processes. The first line shows the total user and
system time (in hundredths of a second) for the current Korn shell,
while the second line shows the totals for the child processes.

The true command does one thing: return a zero exit status.

dtci-ndadabin01:speng $ let "X=$SECONDS*60"
dtci-ndadabin01:speng $ print $X
5360100

Java DB (Derby)

set the environment variables:

1. Set the DERBY_HOME environment variable to the location where you extracted the Derby bin distribution.
For example, if you installed Derby in the /opt/Derby_10 directory on UNIX or in the c:\Derby_10 directory on Windows, use the following command to set the DERBY_HOME environment variable:
UNIX:     export DERBY_HOME=/opt/Derby_10
Windows:  set DERBY_HOME=c:\Derby_10

2. Be certain that the java.exe file, version 1.4.2 or higher, is in your command execution PATH. Open a command window and run the java -version command.
3. Add the DERBY_HOME/bin directory to the PATH environment variable so that you can run the Derby scripts from any directory.
UNIX:     export PATH="$DERBY_HOME/bin:$PATH"
Windows:  set PATH=%DERBY_HOME%\bin;%PATH%   (If you use the Control Panel to update your system PATH, add %DERBY_HOME%\bin to the end of the PATH environment variable.)

Tip: connect to (and create) a database from the ij tool:

CONNECT 'jdbc:derby:firstdb;create=true';

Abinitio

How to find the rejected records after a graph aborts.

##edit host setup
. /etc/profile
. ~abinitio/.profile                   
set_env feeds_out

$AI_LOG_FILE

Where is the watcher dataset located, so that the last watcher dataset can be saved?

1.
dtci-ndadabin01:speng $ m_env -v
ab initio version 2.13.1 patch level 89 built on Solaris8-n32mt
2. Sandbox parameters
Edit Sandbox > Parameters; the directory should refer to $PROJECT_DIR.
dtci-ndadabin01:speng $ echo $PROJECT_DIR
/home7/speng/abinitio/sand/nda/feeds_out_main
The sandbox marker file: view $PROJECT_DIR/.air-project-parameters. Do not edit the marker files; they are
maintained automatically.
air sandbox parameter TESTSTONE VALUEOFSTONE
Why is a sandbox parameter sometimes always_visible and sometimes cond_visible?
parameter: local implicit "NFI_AVG_OPF_OPEN_AUCTION" string "ORDER_BOOK_OPEN" "" optional always_visible "" dollar_substitution export 0




Part 2
Ab Initio
1. Join Type
If record A is required, set its record-required parameter to True.
    select A.field1 ... from A left outer join B on A.field = B.field
For a 'left outer' join:
    record_required0: true
    record_required1: false
For a 'right outer' join:
    record_required0: false
    record_required1: true
For an 'inner' join:
    record_required0: true
    record_required1: true
For a 'full outer' join:
    record_required0: false
    record_required1: false
2. Use m_eval to quickly test the expressions
Run > Execute Command...
m_eval "string_filter_out('TesT1345 Value45','0123456789')"             or m_env describe AB_REPOR
3. Execute a graph
cd my_sandbox/run
./my_graph.ksh
or
air sandbox run my_sandbox/mp/my_graph.mp
4. Encrypt password
In the DBC file, the password can be encrypted.    F8
m_password -password student0
5.
Any component can run on any server. The location at which the component runs is called the layout.
6.
When a public project is used by another project, the used project is called a common project.
A private project contains metadata that is hidden from (and not needed by) other projects.
Two environment projects are included everywhere:
localenv    local machine environment
stdenv     Ab Initio Standard Environment
The standard Environment parameter AI_TEST_FLAG is used to control the location of your personal data area.
you need to set an override value in AI_TEST_FLAG_OVERRIDE before you begin to work.

You cannot assume your DML record format definition matches the actual record structure of bad data. The Validate Records component and the is_valid() DML built-in function can be used.
The lowest priority is blank (the default); the highest is 1, then 2, and so on.
\0001 is an unprintable character (possible delimiter)
Make key fields fixed-length if possible, and keep fixed fields at the front of the record; this is more efficient.
The graph can also be executed using the following:
air sandbox run my_sandbox/mp/my_graph.mp

m_eval '[record x (date("YYYYMMDD"))0 s (string(5)) "xyz"]'

The TIBCO EMS software 5.1 runs on Windows and Solaris.
It should not be confused with TIBCO Rendezvous.

Difference between Tibco Rendezvous and TIB EMS server
Reply from Rajiv Totlani on 9/3/2004 12:49 PM

EMS Server is a tool that acts as a JMS server where
JMS is the messaging protocol (and more as we will see
later in this note). RV is a messaging protocol from
TIBCO, but unlike JMS does not require a server.

EMS is also a gateway from JMS to RV (think of a
gateway as something where you go from one protocol to
another). The reason why Tibco calls it EMS has
nothing to do with "Tibco Product architecture tight
coupling". There is no tight coupling in messaging if
you do things right and use XML for data. The entire
idea behind messaging is to remove tight coupling. The
reason it's called EMS or Enterprise Messaging Server
is because it does more than just JMS and calling it
just a JMS server does not do it justice.

One of the very good ideas behind JMS was to promote
interoperability between different MOMs (Message
Oriented Middlewares). In order for this to be
possible, the MOM companies provide a tool that
converts from their message format to JMS and the
other MOM that it needs to interoperate with converts
from JMS to its format (JMS is the language they all
speak in addition to their languages). EMS conforms
to this idea (and I have seen it being used
in this way...for high volume fast transactions use
reliable RV messages and where latency is acceptable
use JMS).

Rajiv Totlani



JMS supports these messaging models:
• Point-to-Point (queues)
• Publish and Subscribe (topics)
• Multicast (topics)
Point-to-Point
Point-to-point messaging has one producer and one consumer per message. This
style of messaging uses a queue to store messages until they are received. The
message producer sends the message to the queue; the message consumer
retrieves messages from the queue and sends acknowledgement that the message
was received.
Publish and Subscribe
In a publish and subscribe message system, producers address messages to a
topic. In this model, the producer is known as a publisher and the consumer is
known as a subscriber.
Many publishers can publish to the same topic, and a message from a single
publisher can be received by many subscribers. Subscribers subscribe to topics,
and all messages published to the topic are received by all subscribers to the topic.
This type of message protocol is also known as broadcast messaging because
messages are sent over the network and received by all interested subscribers,
similar to how radio or television signals are broadcast and received.
By default, subscribers only receive messages when they are active. If messages
arrive on the topic when the subscriber is not available, the subscriber does not
receive those messages.
The EMS APIs allow you to create durable subscribers to ensure that messages are
received, even if the message consumer is not currently running.
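
A minimal Java sketch using the standard JMS API; the session and topic objects are assumed to be created elsewhere, and the durable name "newsDurable" is made up for illustration:

import javax.jms.JMSException;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicSubscriber;

class DurableSubscriberExample {
    // A durable subscriber also receives messages that were published while it was offline.
    static TopicSubscriber subscribeDurably(Session session, Topic topic) throws JMSException {
        return session.createDurableSubscriber(topic, "newsDurable");
    }
}
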
Multicast
Multicast messaging allows one message producer to send a message to multiple
subscribed consumers simultaneously. As in the publish and subscribe messaging
models, the message producer addresses the message to a topic. Instead of
delivering a copy of the message to each individual subscriber over TCP,
however, the EMS server broadcasts the message over Pragmatic General
Multicast (PGM). A daemon running on the machine with the subscribed EMS
client receives the multicast message and delivers it to the message consumer.
Multicast is highly scalable because of the reduction in bandwidth used to
broadcast messages, and because of reduced EMS server resources used.
However, multicast does not guarantee message delivery to all subscribers.
JMS messages have a standard structure. This structure includes the following
sections:
• Header (required)
• Properties (optional)
• Body (optional)
JMS_TIBCO_COMPRESS    Allows messages to be compressed for more efficient storage.
The JMS specification includes a JMSPriority message header field in which
senders can set the priority of a message, as a value in the range [0,9]. EMS does
support message priority (though it is optional, and other vendors might not
implement it).
The EMS client APIs (Java, .NET and C) include mechanisms for handling strings
and specifying the character encoding used for all strings within a message.
TIBCO Enterprise Message Service allows a client to compress the body of a
message before sending the message to the server.
EMS supports message compression/decompression across client types (Java, C
and C#). For example, a Java producer may compress a message and a C
consumer may decompress the message.
To set message compression, the application that sends or publishes the message
must access the message properties and set the boolean property
JMS_TIBCO_COMPRESS to true before sending or publishing the message.
Compressed messages are handled transparently. The client code only sets the
JMS_TIBCO_COMPRESS property. The client does not need to take any other action.
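
A minimal Java sketch of the sending side, using only standard JMS calls; the session and producer objects are assumed to be created from an EMS connection elsewhere:

import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;

class CompressedSendExample {
    static void sendCompressed(Session session, MessageProducer producer, String text) throws JMSException {
        TextMessage msg = session.createTextMessage(text);
        msg.setBooleanProperty("JMS_TIBCO_COMPRESS", true);  // ask EMS to compress the message body
        producer.send(msg);
        // The consumer needs no special code; decompression is transparent.
    }
}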

The EMS APIs allow for both synchronous or asynchronous message
consumption. For synchronous consumption, the message consumer explicitly
invokes a receive call on the topic or queue. When synchronously receiving
messages, the consumer remains blocked until a message arrives. See Receiving
Messages on page 320 for details.
The consumer can receive messages asynchronously by registering a message
listener to receive the messages. When a message arrives at the destination, the
message listener delivers the message to the message consumer. The message
consumer is free to do other operations between messages. See Creating a
Message Listener for Asynchronous Message Consumption on page 313 for
details.
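
A minimal Java sketch of asynchronous consumption with a message listener, using the standard JMS API; the consumer object is assumed to be created elsewhere:

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

class AsyncConsumeExample {
    static void listen(MessageConsumer consumer) throws JMSException {
        consumer.setMessageListener(new MessageListener() {
            public void onMessage(Message msg) {
                try {
                    if (msg instanceof TextMessage) {
                        System.out.println("Received: " + ((TextMessage) msg).getText());
                    }
                } catch (JMSException e) {
                    e.printStackTrace();
                }
            }
        });
        // The calling thread is now free to do other work; onMessage() runs as messages arrive.
    }
}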
You can use wildcards when specifying statically created destinations in
queues.conf and topics.conf. The use of wildcards in destination names can
be used to define "parent" and "child" destination relationships, where the child
destinations inherit the properties and permissions from its parents.

VS2008


1. How to get which classes implement the interface?

 (Open Type Hierarchy in Eclipse)

2. Navigate backward/forward

Shift+-     Ctrl+Shift+-     cannot highlight the whole word (how to customize?) (Alt+Left/Right in Eclipse)

3. How to select all text between the curly brackets

(double-click one of the brackets in Eclipse)

4. Show Line Number

(Tools > Options > Text Editor > All Languages > Line numbers) (right-click in Eclipse)

Servlet Notes

1. multithread
Only one instance of a particular servlet is created, and each request for that servlet passes through the same object. This strategy helps the container make
the best use of available resources. The tradeoff is that the servlet's doGet() and doPost() methods must be programmed in a thread-safe manner.
A Web container will typically create a thread to handle each request. If you want to ensure that a servlet instance handles only one request at a time, a servlet
can implement the SingleThreadModel interface. If a servlet implements this interface, you are guaranteed that no two threads will execute concurrently in the
servlet's service method. A Web container can implement this guarantee by synchronizing access to a single instance of the servlet, or by maintaining a pool of
Web component instances and dispatching each new request to a free instance. This interface does not prevent synchronization problems that result from Web
components accessing shared resources such as static class variables or external objects.
Resolutions:
1. Implement SingleThreadModel
2. Synchronize access to instance variables
3. Avoid using instance variables
Avoiding instance variables is the best way to keep a servlet thread-safe. From the Java memory model we also know that local variables in a method are allocated on the stack,
and each thread has its own private stack space, so they do not affect thread safety.
(Instance variables are allocated on the heap and shared by all threads that use the instance, so they are not thread-safe.)
1. Stack: allocated and released automatically by the compiler; holds function parameter values, local variable values, and so on. It behaves like the stack data structure.
2. Heap: generally allocated and released by the programmer; if the programmer does not release it, the OS may reclaim it when the program ends. Note that this is unrelated to the heap data structure; the allocation style is more like a linked list.
The JSP implicit objects OUT, REQUEST, RESPONSE, SESSION, CONFIG, PAGE, and PAGECONTEXT are thread-safe; APPLICATION is shared across the whole application, so it is not thread-safe.
A static method only runs into multi-thread synchronization problems when it references static variables.
Variables created inside a static method are created separately for each calling thread and do not share a storage location.
But a static member of a class occupies one storage area once the class is loaded, and every thread that accesses that static member
operates on that shared storage.
Compare the following code:
public class A {
    static String name1 = "";
    String name2 = "";
    public static void setName1(String name) {
        name1 = name;          // writes a shared static field
    }
    public void setName2(String name) {
        this.name2 = name;     // writes a per-instance field
    }
}
Of these two methods, setName1 has a multi-thread synchronization problem; setName2 does not.
 
How a servlet container handles multiple requests at the same time:

Servlets handle concurrent requests with multiple threads; the servlet container maintains a thread pool to service requests.
The thread pool is a group of threads waiting to execute code, called worker threads; the servlet container uses a dispatcher thread to manage the worker threads.

When the container receives a request for a servlet, the dispatcher thread picks a worker thread from the pool, hands the request to it, and that thread executes the servlet's service method.
If another request arrives while that thread is still running, the dispatcher picks another worker thread from the pool to serve the new request; the container does not care whether the request is for the same servlet or a different one.
When the container receives multiple simultaneous requests for the same servlet, that servlet's service method executes concurrently on multiple threads.

2.
The ServletContext interface [javax.servlet.ServletContext] defines a servlet's view of the web application within which the servlet is running. It is accessible in a servlet
via the getServletContext() method, and in a JSP page as the application implicit variable. Servlet contexts provide several APIs that are very useful in building web
applications:
Access To Web Application Resources - A servlet can access static resource files within the web application using the getResource() and getResourceAsStream() methods.
Servlet Context Attributes - The context makes available a storage place for Java objects, identified by string-valued keys. These attributes are global to the
entire web application, and may be accessed by a servlet using the getAttribute(), getAttributeNames(), removeAttribute(), and setAttribute() methods. From a JSP page,
servlet context attributes are also known as "application scope beans".
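
A minimal Java sketch of servlet context attributes (the attribute name "startupTime" is made up for illustration):

import java.util.Date;
import javax.servlet.ServletContext;

class ContextAttributeExample {
    // Stores and reads back an application-scope object shared by the whole web application.
    static void shareStartupTime(ServletContext ctx) {
        ctx.setAttribute("startupTime", new Date());
        Date started = (Date) ctx.getAttribute("startupTime");
        ctx.log("Application started at " + started);
    }
}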

3.
One of the key characteristics of HTTP is that it is stateless. In other words, there is nothing built in to HTTP that identifies a subsequent request from the
same user as being related to a previous request from that user. This makes building an application that wants to engage in a conversation with the user over several
requests to be somewhat difficult. To alleviate this difficulty, the servlet API provides a programmatic concept called a session, represented as an object that
implements the javax.servlet.http.HttpSession interface. The servlet container will use one of two techniques (cookies or URL rewriting) to ensure that the next
request from the same user will include the session id for this session, so that state information saved in the session can be associated with multiple requests.
This state information is stored in session attributes (in JSP, they are known as "session scope beans"). To avoid occupying resources indefinitely when a user
fails to complete an interaction, sessions have a configurable timeout interval. If the time gap between two requests exceeds this interval, the session will
be timed out, and all session attributes removed. You define a default session timeout in your web application deployment descriptor, and you can dynamically
change it for a particular session by calling the setMaxInactiveInterval() method. Unlike requests, you need to be concerned about thread safety on your session
attributes (the methods these beans provide, not the getAttribute() and setAttribute() methods of the session itself). It is surprisingly easy for there to be
multiple simultaneous requests from the same user, which will therefore access the same session. Another important consideration is that session attributes
occupy memory in your server in between requests. This can have an impact on the number of simultaneous users that your application can support. If your
application requirements include very large numbers of simultaneous users, you will likely want to minimize your use of session attributes, in an effort to
control the overall amount of memory required to support your application.
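
A minimal Java sketch of session attributes and the session timeout, using the standard servlet API (the attribute name "cart" is made up for illustration):

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

class SessionExample {
    static void rememberCart(HttpServletRequest request, Object cart) {
        HttpSession session = request.getSession(true);   // create the session if it does not exist yet
        session.setAttribute("cart", cart);               // a "session scope bean"
        session.setMaxInactiveInterval(30 * 60);          // timeout for this session, in seconds
    }
}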

4.
The original MVC pattern is like a closed loop: The View talks to the Controller, which talks to the Model, which talks to the View.
But, a direct link between the Model and the View is not practical for web applications, and we modify the classic MVC arrangement so that it would look less like a
loop and more like a horseshoe with the controller in the middle.
In the MVC/Model 2 design pattern, application flow is mediated by a central Controller.
The Controller delegates requests - in our case, HTTP requests - to an appropriate handler. The handlers are tied to a Model, and each handler
acts as an adapter between the request and the Model. The Model represents, or encapsulates, an application's business logic or state. Control is usually then
forwarded back through the Controller to the appropriate View. The forwarding can be determined by consulting a set of mappings, usually loaded from a database or
configuration file. This provides a loose coupling between the View and Model, which can make applications significantly easier to create and maintain.

Velocity笔记

## single-line comment, or #* ... *# multi-line comment
<input type="text" name="email" value="$!email"/>
\$email escape
$data.getRequest().getServerName()  ## is the same as  $data.Request.ServerName
Directives always begin with a #
#set( $primate = "monkey" )
#set( $monkey.Say = ["Not", $my, "fault"] ) ## ArrayList
#set( $monkey.Map = {"banana" : "good", "roast beef" : "bad"}) ## Map
you could access the first element above using $monkey.Say.get(0), and $monkey.Map.get("banana") returns the String 'good'
#literal()      #end
<table>
#foreach( $customer in $customerList )
   <tr><td>$velocityCount</td><td>$customer.Name</td></tr>
#end
</table>
#foreach( $foo in [1..5] )
$foo
#end

As a programmer, the classes you should use to interact with the Velocity internals are the org.apache.velocity.app.Velocity
class if using the singleton model, or org.apache.velocity.app.VelocityEngine if using the non-singleton model ('separate instance').
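
A minimal Java sketch using the non-singleton VelocityEngine to evaluate an inline template string:

import java.io.StringWriter;
import org.apache.velocity.VelocityContext;
import org.apache.velocity.app.VelocityEngine;

public class VelocityHello {
    public static void main(String[] args) {
        VelocityEngine engine = new VelocityEngine();
        engine.init();
        VelocityContext context = new VelocityContext();
        context.put("primate", "monkey");                 // referenced as $primate in the template
        StringWriter out = new StringWriter();
        engine.evaluate(context, out, "example", "Hello $primate!");
        System.out.println(out);                          // prints: Hello monkey!
    }
}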





Check $ (references) and # (directives).