WCS Server configuration

The diagram below shows a typical WCS deployment topology. The terminology used by WCS admins can become overwhelming and fuzzy at times for folks who are new to WCS; this diagram can help you visualize what a typical production setup looks like and what the various terms mean in a WCS context.
- A WCS instance, or multiple JVM instances, can co-exist within a physical box and share a common deployment directory (the WC EAR file); hence any configuration change to wc-server.xml impacts all WC JVM instances in that box/node.
- A node refers to any physical box, or anything with an IP address; a node is a combination of one or more WCS servers or JVMs.
- A node agent resides on every node. It is a special administrative process (not a regular application server) and must be up and running for the DMGR to manage and synchronize code out to the cluster nodes. WCS JVMs remain functional and serve traffic even if the node agent is down, but they cannot be administered from the DMGR while it is down.
- The deployment manager (DMGR) holds the master copy of the WC EAR file; during deployment it is pushed from the DMGR to the various nodes via the node agents running on each node. The DMGR is another WAS instance used by a WCS admin to manage the federated nodes, and the admin console hosted on the DMGR is used for various admin activities such as deployment, WCS JVM stop/start, cluster-level datasource configuration, etc.
- The IHS plugin file (plugin-cfg.xml) can be generated from the DMGR WAS admin console. This file lists all the WCS servers/JVMs in a cluster, with their hostnames, that can serve traffic coming out of the webserver. The plugin file is copied onto the IHS/webserver boxes, and IHS uses the plugin-cfg.xml definitions to route external traffic coming into the webserver onward to the WCS servers.
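To make the structure concrete, here is a minimal sketch of what a generated plugin-cfg.xml looks like; the cluster name, server names, hostnames, and ports below are illustrative assumptions, not values from any real deployment. A real generated file contains additional attributes and elements.

```xml
<Config RefreshInterval="60">
  <!-- One ServerCluster per WCS cluster defined in the DMGR -->
  <ServerCluster Name="WC_demo_cluster" LoadBalance="Round Robin" RetryInterval="60">
    <!-- One Server entry per WCS JVM; Transport gives the host/port IHS routes to -->
    <Server Name="wcsNode01_server1">
      <Transport Hostname="wcsnode01.example.com" Port="9080" Protocol="http"/>
    </Server>
    <Server Name="wcsNode02_server1">
      <Transport Hostname="wcsnode02.example.com" Port="9080" Protocol="http"/>
    </Server>
  </ServerCluster>
  <!-- URIs matching this group are routed to the cluster above -->
  <UriGroup Name="WC_demo_cluster_URIs">
    <Uri Name="/webapp/wcs/*"/>
  </UriGroup>
  <Route ServerCluster="WC_demo_cluster" UriGroup="WC_demo_cluster_URIs"/>
</Config>
```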
IBM HTTP Server configuration
Similar to a WCS farm managed from a DMGR, a typical webserver installation also has a farm of webservers managed from a WAS admin console.
- httpd.conf is the webserver configuration file used to start the server; it contains an entry that links the plugin-cfg.xml file generated from the WCS DMGR to the IHS server:
- WebSpherePluginConfig <Absolute_Path_TO_Plugin>/plugin-cfg.xml
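For context, the directive above sits in httpd.conf alongside the LoadModule line that loads the WebSphere plug-in module itself. The module filename and paths below are illustrative assumptions (they vary by plug-in version and install location):

```apacheconf
# Load the WebSphere web server plug-in module (path/filename vary by version)
LoadModule was_ap22_module /opt/IBM/WebSphere/Plugins/bin/mod_was_ap22_http.so
# Point the plug-in at the plugin-cfg.xml generated from the WCS DMGR
WebSpherePluginConfig /opt/IBM/WebSphere/Plugins/config/webserver1/plugin-cfg.xml
```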
- plugin-cfg.xml is used for routing and load-balancing traffic from the webserver to the WCS app servers. An external hardware or software load balancer forwards requests to the IHS server, which then uses the plugin-cfg.xml generated from the WCS DMGR to determine which WCS server will serve each request.
An updated plugin file is automatically reloaded and does not require a webserver restart; this is driven by the RefreshInterval setting in the plug-in configuration file, which by default causes the plug-in to re-read the file every 60 seconds.
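The interval is an attribute on the top-level Config element of plugin-cfg.xml; for example, to check for an updated file every 120 seconds instead of the 60-second default (value shown here purely for illustration):

```xml
<!-- Plug-in re-reads plugin-cfg.xml every RefreshInterval seconds -->
<Config RefreshInterval="120">
  <!-- ServerCluster, UriGroup, Route definitions ... -->
</Config>
```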
- There are several load-balancing rules the webserver can apply when picking a WCS instance to serve traffic; an excellent tech note discusses these configurations: http://www-01.ibm.com/support/docview.wss?uid=swg21219567
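The load-balancing policy is set per cluster via the LoadBalance attribute ("Round Robin" or "Random"), and individual servers can be weighted with LoadBalanceWeight. The cluster/server names and weights below are illustrative assumptions:

```xml
<!-- Round-robin across the cluster; server1 receives roughly twice
     the share of requests of server2 due to its higher weight -->
<ServerCluster Name="WC_demo_cluster" LoadBalance="Round Robin">
  <Server Name="wcsNode01_server1" LoadBalanceWeight="2">
    <Transport Hostname="wcsnode01.example.com" Port="9080" Protocol="http"/>
  </Server>
  <Server Name="wcsNode02_server1" LoadBalanceWeight="1">
    <Transport Hostname="wcsnode02.example.com" Port="9080" Protocol="http"/>
  </Server>
</ServerCluster>
```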
- http_plugin.log can be used to trace the handshake between the webserver and the WCS instance handling the user traffic. By default tracing is disabled, which is the suggested setting, since trace generates a whole lot of logging and may impact performance under high traffic. To debug issues, enable trace by editing the following line in plugin-cfg.xml:
- <Log LogLevel="Trace" Name="/pathto/logs/http_plugin.log"/>
- Refer to the following link for more details on the plugin routing logic: http://www-01.ibm.com/support/docview.wss?uid=swg21219808