Copy URLs with netcat. Run it backgrounded.
<title>Wcp: a simple HTTP URL copy</title>
<body bgcolor="#ffffff">
<h4>wcp: a simple HTTP URL cp for UNIX</h4>
<hr>
More and more developers of applications, programs and other stuff on the
Internet are now beginning to distribute their files via HTTP only.
One drawback, for people who do not have the fastest lines in the world,
is that you <i><b>cannot retrieve a document and leave the process running
in the background</b></i>.<p>
So I've built this simple but very helpful script named <i>wcp</i>,
which simply copies the URL in question. It uses the <i>netcat</i>
program, written by <a href="mailto:hobbit@avian.org">
*Hobbit* (hobbit@avian.org)</a>. <i>Netcat</i> is available from the
following URL: <a href="ftp://ftp.avian.org/src/hacks/">
ftp://ftp.avian.org/src/hacks/</a>. If you
do not want to use <i>netcat</i>, you may substitute <i>telnet</i>
for <i>nc</i>.<p>
<pre>
#!/bin/sh
# @(#)wcp.sh, copy web pages, adamo@dblab.ntua.gr
[ $# -ne 1 ] && {
echo "Usage: wcp http://host[:port]/path/name" >&2
exit 1
}
proto=`echo "$1" | cut -d: -f1`
[ "$proto" = "http" ] || {
echo "wcp: only http URLs are supported" >&2
exit 1
}
host_port=`echo "$1" | cut -d/ -f3`
host=`echo "$host_port" | cut -d: -f1`
port=`echo "$host_port" | cut -s -d: -f2`   # -s: empty when no :port given
[ -z "$port" ] && port=80
pathname=`echo "$1" | cut -d/ -f4-`
file=`echo "$pathname" | awk -F"/" '{print $NF}'`
[ -z "$file" ] && file=index.html           # URL ended in a slash
exec 2>"${file}.wcplog"                     # diagnostics go to a log file
exec 1>"$file"                              # the document goes to $file
echo "GET /${pathname}" | nc "$host" "$port"
exit $?
# end of file
</pre>
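For readers unfamiliar with the <i>cut</i> pipeline above, here is a
minimal sketch of just the URL-splitting steps, run on hypothetical URLs
(no network involved; the hostnames and paths are made up for illustration):

```shell
# Split a URL into host, port, path and filename, the way wcp does.
url='http://www.example.com:8080/pub/src/wcp.sh'

host_port=`echo "$url" | cut -d/ -f3`             # www.example.com:8080
host=`echo "$host_port" | cut -d: -f1`            # www.example.com
port=`echo "$host_port" | cut -s -d: -f2`         # 8080
[ -z "$port" ] && port=80                         # default when no :port
pathname=`echo "$url" | cut -d/ -f4-`             # pub/src/wcp.sh
file=`echo "$pathname" | awk -F"/" '{print $NF}'` # wcp.sh

# When the URL carries no :port, cut -s prints nothing and 80 is used.
url2='http://www.example.com/index.html'
hp2=`echo "$url2" | cut -d/ -f3`
port2=`echo "$hp2" | cut -s -d: -f2`
[ -z "$port2" ] && port2=80

echo "$host $port /$pathname -> $file (second URL: port $port2)"
```

Once the script is installed, a transfer would simply be started in the
background, e.g. <tt>wcp http://host/path/file.tar.gz &amp;</tt>, and its
progress checked by looking at the <tt>.wcplog</tt> file it writes.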
<a href="mailto:adamo@dblab.ntua.gr">-- adamo@dblab.ntua.gr --</a>.
</body>