[plug] Linux NFS client to Solaris HSM NFS server

Simon Scott simon.scott at flexiplan.com
Thu Jun 14 13:25:19 WST 2001


	I would guess that the Solaris version of the NFS client has been
hacked to implement this 'blocking', and the Linux version hasn't.
Probably you will find that there is no such thing as 'blocking' under
NFS, but they had to implement it to avoid time-outs etc.
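
	For what it's worth, NFS version 3 (RFC 1813) does define a status
code for exactly this situation: NFS3ERR_JUKEBOX (10008), meaning the data
is temporarily offline and the request should be retried later. Presumably
that is what the Solaris server returns for an offline file, and the
Solaris client turns it into the retry-plus-message behaviour rather than
an I/O error. Purely as an illustration (the constant value is from the
RFC, everything else here is made up), the decision a client could make
looks roughly like this:

/* Illustration only -- not the actual Linux or Solaris client code.
 * NFS3ERR_JUKEBOX (RFC 1813) means "data temporarily offline, retry
 * the request later" instead of failing it with EIO. */
#include <stdio.h>

#define NFS3_OK         0
#define NFS3ERR_JUKEBOX 10008   /* value from RFC 1813 */

/* Should a reply with this status be retried after a delay rather than
 * surfaced to the application as an I/O error? */
static int should_retry(int nfs3_status)
{
    if (nfs3_status == NFS3ERR_JUKEBOX) {
        fprintf(stderr,
            "file temporarily unavailable on the server, retrying...\n");
        return 1;
    }
    return 0;
}

int main(void)
{
    printf("JUKEBOX -> retry? %d\n", should_retry(NFS3ERR_JUKEBOX));
    printf("OK      -> retry? %d\n", should_retry(NFS3_OK));
    return 0;
}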

	I guess you could change the client yourself to grab that error,
print the message, and try again. Perhaps someone has had the same problem
and already has a patch?
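
	If it is the read that comes back with EIO (as in the error quoted
below), a crude userspace stopgap might be to wrap the read and retry on
that error, along the lines of the sketch below. This assumes a later
retry can actually succeed once the HSM has staged the file back in,
which would need testing against SAM-fs; the retry count and interval
are arbitrary.

/* Crude userspace stopgap: retry a read that fails with EIO, on the
 * assumption that it will succeed once the HSM has brought the file
 * back online. Untested against SAM-fs. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static ssize_t read_retry(int fd, void *buf, size_t len)
{
    int tries;

    for (tries = 0; tries < 60; tries++) {
        ssize_t n = read(fd, buf, len);
        if (n >= 0 || errno != EIO)
            return n;                 /* success, or a different error */
        fprintf(stderr, "file temporarily unavailable, retrying...\n");
        sleep(5);                     /* arbitrary back-off */
    }
    errno = EIO;
    return -1;
}

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char buf[8192];
    ssize_t n;
    while ((n = read_retry(fd, buf, sizeof buf)) > 0)
        write(STDOUT_FILENO, buf, (size_t)n);
    if (n < 0)
        perror("read");

    close(fd);
    return 0;
}

The real fix, of course, would be in the kernel client itself, so that
every application gets the Solaris-style behaviour for free.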

	Is anyone here familiar with the NFS client source?







	From:	Ian Kent <ian.kent at pobox.com> on 14-06-2001 12:01 PM
	Please respond to plug at plug.linux.org.au@SMTP at Exchange
	To:	nfs at lists.sourceforge.net@SMTP at Exchange,
plug at plug.linux.org.au@SMTP at Exchange
	cc:	 

	Subject:	[plug] Linux NFS client to Solaris HSM NFS server


	Hi all,

	I have an NFS access problem with a Red Hat 7.1 client using a 2.4.4
	kernel with Trond's NFS V3 patches, connecting to a Solaris HSM NFS
	server. Mount is version 2.10r and nfs-utils is 0.3.1.

	On the HSM NFS server I am using SAM-fs version 3.3.1 from LSC with
	Solaris 2.6.

	SAM-fs is a hierarchical storage management (HSM) system for Solaris.
	It provides a virtual filesystem that resides on primary storage
	(magnetic disk) and secondary storage (typically magnetic tape or
	optical disk).

	When files are written into the HSM filesystem they are written
	immediately to the primary storage (disk) and then, after a short
	period of time, to secondary storage. If the system reaches a point
	where it starts to run out of space on the primary (cache) disk, it
	removes the data portion of the file from the cache, leaving the copy
	on secondary storage. The file is said to be 'offline'.

	When this happens, the directory entry for the file remains in place.
	As such, the file is still visible to 'ls', 'find' and all the usual
	directory-level tools. However, if an application tries to open an
	'offline' file, the open operation is blocked while the HSM system
	retrieves the file data from the secondary media and puts it back on
	the disk cache. The application is then passed the data.

	On Solaris NFS clients this process is seamless. The client blocks
	until the file data is available and then continues, with no further
	interaction required from the user/application. In interactive
	sessions, a message appears in the parent shell saying:

	"file temporarily unavailable on the server, retrying... "

	It is unclear whether this message is actually passed verbatim from
	the server to the client or whether a response code from the server
	triggers the client to generate the message for the user.

	My problem is that when attempting to access a SAM-fs file which is
	offline (i.e. not present on disk, but archived to tape), the Linux
	client issues the error:

	error:read failed: Input/output error (5)

	plus other errors relating to the command used.

	On a Solaris client the same request will block until the SAM-fs file
	comes online (i.e. brought in from tape) and then carry on.

	I have tried sync vs. async and tcp vs. udp mount options without any
	change in the error response.

	Looking at the tcpdump output I get the impression that the client is
	receiving an unknown response and is returning the I/O error as a
	last-chance response.

	Can anyone help with this? I can provide tcpdump files if needed.

	Ian Kent




