Jul 22 2008

Feedback on Freeradius IP Pools

Published under Freeradius

If you are wondering whether you should use rlm_ippool or rlm_sqlippool to turn your Radius server into a “DHCP” server, read on!


rlm_ippool

We first configured Freeradius to provide IP addresses through the ippool module. IPs are stored internally in a binary data file.



radiusd.conf

ippool main_pool {
                range-start = 192.168.0.2
                range-stop = 192.168.0.254
                netmask = 255.255.255.0
                cache-size = 800
                session-db = ${raddbdir}/db.ippool
                ip-index = ${raddbdir}/db.ipindex
                override = yes
                maximum-timeout = 0
}

accounting {
        main_pool
}

post-auth {
        main_pool
}

 
Users

In users, we’ve got:

DEFAULT         Pool-Name := main_pool
                Fall-Through = Yes

On startup, db.ippool and db.ipindex are created in the configuration directory.

 
Test

lease-duration is set to 10 in sqlippool.conf for testing purposes. IPs should be released after 10 seconds.
 
# Let's check the normal behaviour
echo "Connecting user test..."
echo "User-Name=\"test\",User-Password=\"test\",NAS-IP-Address=\"127.0.0.1\",NAS-Port=0" | radclient localhost:1812 auth testing123
echo "User-Name=\"test\",Acct-Session-Id=\"6000006B\",Acct-Status-Type=\"Start\",NAS-IP-Address=\"127.0.0.1\",NAS-Port=0" | radclient localhost:1813 acct testing123
# Checking the number of IPs delivered - should be 1
rlm_ippool_tool -c etc/raddb/db.ippool etc/raddb/db.ipindex
=> 1

echo "Disconnecting user test"
echo "User-Name=\"test\",Acct-Session-Id=\"6000006B\",Acct-Status-Type=\"Stop\",NAS-IP-Address=\"127.0.0.1\",NAS-Port=0" | radclient localhost:1813 acct testing123
# Checking the number of IPs delivered - should be 0
rlm_ippool_tool -c etc/raddb/db.ippool etc/raddb/db.ipindex
=> 0 - Good!

# Let's check the lease timeout
echo "Connecting user test..."
echo "User-Name=\"test\",User-Password=\"test\",NAS-IP-Address=\"127.0.0.1\",NAS-Port=0" | radclient localhost:1812 auth testing123
echo "User-Name=\"test\",Acct-Session-Id=\"6000006B\",Acct-Status-Type=\"Start\",NAS-IP-Address=\"127.0.0.1\",NAS-Port=0" | radclient localhost:1813 acct testing123
rlm_ippool_tool -c etc/raddb/db.ippool etc/raddb/db.ipindex
=> 1
# We wait till the lease times out
sleep 11
rlm_ippool_tool -c etc/raddb/db.ippool etc/raddb/db.ipindex
=> 1

The timeout isn't working!


rlm_sqlippool

 
radiusd.conf

Upgrade first to Freeradius 1.1.7 or later and make the following changes to radiusd.conf:
Uncomment “$INCLUDE  ${confdir}/sqlippool.conf”, remove main_pool and add sqlippool in the accounting and post-auth sections.

accounting {
        sqlippool
}

post-auth {
        sqlippool
}

 
users

DEFAULT         Pool-Name := main_pool
                Fall-Through = Yes

 
SQL IP Pool Creation

Add the radippool table structure to the MySQL database if necessary (the schema is included in the Freeradius distribution):

#
# Table structure for table 'radippool'
#
CREATE TABLE radippool (
  id                    int(11) unsigned NOT NULL auto_increment,
  pool_name             varchar(30) NOT NULL,
  FramedIPAddress       varchar(15) NOT NULL default '',
  NASIPAddress          varchar(15) NOT NULL default '',
  CalledStationId       VARCHAR(30) NOT NULL,
  CallingStationID      VARCHAR(30) NOT NULL,
  expiry_time           DATETIME NOT NULL default '0000-00-00 00:00:00',
  username              varchar(64) NOT NULL default '',
  pool_key              varchar(30) NOT NULL,
  PRIMARY KEY (id)
);
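Pool lookups filter on pool_name and pick the entry with the oldest expiry_time, so with a large pool an index on those columns can help. This is a hedged suggestion, not part of the stock schema:

```sql
-- optional index to speed up pool lookups (not in the stock FreeRADIUS schema)
ALTER TABLE radippool ADD INDEX pool_lookup (pool_name, expiry_time);
```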

and add the file sqlippool.conf (provided in 1.1.7 and later).

Add the IP pool to the database:

INSERT INTO radippool (pool_name, framedipaddress) VALUES ('main_pool','192.168.0.1');
INSERT INTO radippool (pool_name, framedipaddress) VALUES ('main_pool','192.168.0.2');
[...]
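Typing hundreds of INSERT statements by hand is tedious; a small shell loop can generate them for the whole range (a sketch for the 192.168.0.0/24 pool above; pipe the output into the mysql client):

```shell
# emit one INSERT per address in 192.168.0.2 - 192.168.0.254
for i in $(seq 2 254); do
  echo "INSERT INTO radippool (pool_name, framedipaddress) VALUES ('main_pool','192.168.0.$i');"
done
```

For example: `sh genpool.sh | mysql radius` (assuming your Freeradius database is called radius).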


Results

Doing the same tests with the SQL IP pool configuration gives correct results. IPs are released after 10 seconds.

rlm_ippool keeps grabbing IPs and the pool eventually fills up. In the end, you need to reset the pool and the customers’ connections, meaning downtime!
 
On the other hand, SQLippool is interesting if you have several Radius servers serving the same customers. IP pools are managed on the database side, which is convenient.

 


Apr 19 2008

Xen Tips

Published under Linux, Virtualization

Disabling Image Files




Xen usually saves running domains to /var/lib/xen/save when dom0 is shut down, which can fill up /var on many systems. Edit /etc/sysconfig/xendomains or /etc/default/xendomains and replace

"XENDOMAINS_SAVE=/var/lib/xen/save"

with

"XENDOMAINS_SAVE="

With this setting, a shutdown signal is sent to all virtual machines before dom0 reboots instead of saving them. A 300-second timeout is defined before killing an OS that hangs on shutdown.

Static Routes

The file /etc/sysconfig/static-routes is still supported, but its routes are removed after the Xen daemon is launched. Since Redhat Enterprise Linux 3, it is advised to use /etc/sysconfig/network-scripts/route-ethX instead. It should contain information similar to this, for instance:

GATEWAY0=10.25.207.163
NETMASK0=255.255.252.0
ADDRESS0=10.22.40.0

GATEWAY1=10.25.207.163
NETMASK1=255.255.252.0
ADDRESS1=10.22.208.0
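Each ADDRESSn/NETMASKn/GATEWAYn triplet corresponds to one route. As a sketch (using a sample copy of the file, since the real one lives in /etc/sysconfig/network-scripts/), the triplets can be expanded into the equivalent one-off route commands:

```shell
# expand ADDRESSn/NETMASKn/GATEWAYn triplets into 'route add' commands
# (a sample copy is used here; the real file would be
#  /etc/sysconfig/network-scripts/route-eth1)
f=/tmp/route-eth1.sample
printf 'GATEWAY0=10.25.207.163\nNETMASK0=255.255.252.0\nADDRESS0=10.22.40.0\nGATEWAY1=10.25.207.163\nNETMASK1=255.255.252.0\nADDRESS1=10.22.208.0\n' > "$f"
i=0
while grep -q "^ADDRESS$i=" "$f"; do
  a=$(sed -n "s/^ADDRESS$i=//p" "$f")
  m=$(sed -n "s/^NETMASK$i=//p" "$f")
  g=$(sed -n "s/^GATEWAY$i=//p" "$f")
  echo "route add -net $a netmask $m gw $g"
  i=$((i+1))
done
```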

Virtual Console Access

2 methods can be used to access a domU console:

  • “xm console xenhost”. If the virtual host is shut down, “xm create -c xenhost” boots it and launches the console. However, user-space messages (such as service startup) are not displayed.
  • The 2nd option is to launch vncviewer, which emulates a color console.
    Every Xen host has its own VNC port on the physical machine, usually in the 590X range. You can find the ports with netstat.
    Start an X11 server on your local system and run on dom0:

    export DISPLAY=10.22.41.229:0.0
    vncviewer localhost:590X
    

Keyboard in console

Before Redhat Enterprise 5.1, the keyboard always uses a minimal qwerty layout. Upgrade to RHEL 5.1 to access all characters and specify a custom layout.
Add the “keymap” keyword to the Xen configuration file. Example:

vfb = [ "type=vnc,keymap=fr" ]

Serial Console

To get the serial console to work with a Xen kernel, configure Grub this way:

title Systeme principal
       root (hd0,0)
       kernel /xen.gz-2.6.18-53.el5 com2=115200,8n1 pnpacpi=off console=com2
       module /vmlinuz-2.6.18-53.el5xen ro root=/dev/sys1liege/rootfs console=xvc xencons=xvc
       module /initrd-2.6.18-53.el5xen.img

One of the following lines is also needed in /etc/inittab:

7:2345:respawn:/sbin/agetty -L 115200 ttyS1 vt102

or

co:2345:respawn:/sbin/agetty xvc0 115200 vt100-nav

Adding “xvc0” to /etc/securetty may be needed.

NTP

NTP doesn’t need to be set up on virtual machines. Time is synchronized with the hosting machine.

 


Feb 25 2008

Avoid Reboot after Partition Change with Fdisk

Published under Linux

When modifying the partition table, fdisk usually returns “Device or resource busy” error messages such as:

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
WARNING: re-reading the partition table failed.: device or resource busy


Partprobe, from the “parted” package, helps fix this issue and avoids a needless reboot. From the man page:
“partprobe is a program that informs the operating system kernel of partition table changes, by requesting that the operating system re-read the partition table.”



Device Busy and Fdisk

Let’s add a new partition on the Linux server with fdisk. /dev/cciss/c0d0 could of course be /dev/sda or anything else.

[root@linux ~]$ fdisk /dev/cciss/c0d0
The number of cylinders for this disk is set to 8854.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/cciss/c0d0: 72.8 GB, 72833679360 bytes
255 heads, 63 sectors/track, 8854 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

           Device Boot      Start         End      Blocks   Id  System
/dev/cciss/c0d0p1   *           1          16      128488+  83  Linux
/dev/cciss/c0d0p2              17        1060     8385930   8e  Linux LVM
/dev/cciss/c0d0p3            1061        2104     8385930   8e  Linux LVM
/dev/cciss/c0d0p4            2105        8854    54219375    5  Extended
/dev/cciss/c0d0p5            2105        5144    24418768+  8e  Linux LVM

Command (m for help): n
First cylinder (5145-8854, default 5145):
Using default value 5145
Last cylinder or +size or +sizeM or +sizeK (5145-8854, default 8854): +1000M

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.


The new partition is not yet visible on the system; fdisk -l would show the same thing.
I am creating LVM partitions here, but the message would be similar with ext3 or ext4.

[root@linux ~]$ ls /dev/cciss/
c0d0  c0d0p1  c0d0p2  c0d0p3  c0d0p4  c0d0p5


Running partprobe reloads the partition table and brings the partition up:

[root@linux ~]$ partprobe
[root@linux ~]$ ls /dev/cciss/
c0d0  c0d0p1  c0d0p2  c0d0p3  c0d0p4  c0d0p5  c0d0p6


It is now possible to format and mount the partition without rebooting the server.

 


Jan 20 2008

Heartbeat 2 Howto

Published under Linux




Important note:
Heartbeat is now obsolete; development has moved to a new stack available on Clusterlabs. For a simple high-availability project using a virtual IP, try keepalived, which does monitoring and failover with just a simple configuration file.

Since version 2, Heartbeat can manage more than 2 nodes and no longer needs the “mon” utility to monitor services:
this functionality is now implemented within Heartbeat itself.
As a consequence, this flexibility and the new features may make it harder to configure.
It is worth knowing that version 1 configuration files are still supported.

Installation

Heartbeat source files are available from the official site http://www.linux-ha.org. Redhat Enterprise compatible rpms can be downloaded from the Centos website (link given on the Heartbeat website). Rpms are packaged fairly soon after releases: the source version is 2.1.0 while the rpm is at 2.0.8-2 at the time of writing. 3 rpms are needed:

  • heartbeat-pils
  • heartbeat-stonith
  • heartbeat

We are going to monitor Apache, but the configuration remains valid for any other service: mail, database, DNS, DHCP, file server, etc.

Diagrams

Failover or load-balancing

Heartbeat supports Active-Passive for the failover mode and Active-Active for load-balancing.
Many other setups are possible by adding servers and/or services; there are many possibilities.

We will focus on load-balancing; only a few lines need to be removed for failover.


Configuration

In this setup, the 2 servers are interconnected through their eth0 interfaces.
Application traffic arrives on eth1, which is configured as in the diagram, with addresses 192.168.0.4 and .5.
Addresses on eth0 have to be in a sub-network dedicated to Heartbeat.
These addresses must appear in /etc/hosts on every node.

3 files must be configured for Heartbeat 2. Again, they are identical on each node.

In /etc/ha.d/

  • ha.cf
  • authkeys

In /var/lib/heartbeat/crm/

  • cib.xml

ha.cf

ha.cf contains the main settings like cluster nodes or the communication topology.

use_logd on
# Heartbeat packet sending frequency
keepalive 500ms # specify ms for times shorter than 1 second

# Period of time after which a node is declared "dead"
deadtime 2
# Send a warning message
# Important to adjust the deadtime value
warntime 1
# Identical to deadtime but when initializing
initdead 8
udpport 694

# Host to be tested to check the node is still online
# The default gateway most likely
ping 192.168.0.1
# Interface used for Heartbeat packets
# Use "serial" for a serial port,
# "mcast" and "ucast" for multicast and unicast
bcast eth0
# Resource switches back on the primary node when it's available
auto_failback on
# Cluster Nodes List
node n1.domain.com
node n2.domain.com
# Activate the Heartbeat 2 configuration style
crm yes
# Allow to add dynamically a new node to the cluster
autojoin any

Other options are available such as compression or bandwidth for communication on a serial cable. Check http://linux-ha.org/ha.cf

authkeys

Authkeys defines authentication keys.
Several types are available: crc, md5 and sha1. crc is to be used on a secured sub-network (vlan isolation or cross-over cable).
sha1 offers a higher level of security but can consume a lot of CPU resources.
md5 sits in between; it’s not a bad choice.

auth 1
1 md5 secret
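Rather than a dictionary word, the secret can be a random digest. A sketch (writing a sample file under /tmp; the real file is /etc/ha.d/authkeys):

```shell
# generate a random md5 digest to use as the shared authkeys secret
secret=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | md5sum | cut -d' ' -f1)
printf 'auth 1\n1 md5 %s\n' "$secret" > /tmp/authkeys.sample
chmod 600 /tmp/authkeys.sample
echo "generated ${#secret}-character secret"
```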

The password is stored in clear text, so it is important to change the file permissions to 600.

cib.xml

<cib>
<configuration>

  <crm_config/>
  <nodes/>
  <resources>
    <group id="server1">

      <primitive class="ocf" id="IP1" provider="heartbeat" type="IPaddr">
        <operations>
          <op id="IP1_mon" interval="10s" name="monitor" timeout="5s"/>
        </operations>
        <instance_attributes id="IP1_inst_attr">
          <attributes>
            <nvpair id="IP1_attr_0" name="ip" value="192.168.0.2"/>
            <nvpair id="IP1_attr_1" name="netmask" value="24"/>
            <nvpair id="IP1_attr_2" name="nic" value="eth1"/>
          </attributes>
        </instance_attributes>
      </primitive>

      <primitive class="lsb" id="apache1" provider="heartbeat" type="apache">
        <operations>
          <op id="jboss1_mon" interval="30s" name="monitor" timeout="20s"/>
        </operations>
      </primitive>
    </group>

    <group id="server2">

      <primitive class="ocf" id="IP2" provider="heartbeat" type="IPaddr">
        <operations>
          <op id="IP2_mon" interval="10s" name="monitor" timeout="5s"/>
        </operations>
        <instance_attributes id="IP2_inst_attr">
          <attributes>
            <nvpair id="IP2_attr_0" name="ip" value="192.168.0.3"/>
            <nvpair id="IP2_attr_1" name="netmask" value="24"/>
            <nvpair id="IP2_attr_2" name="nic" value="eth1"/>
          </attributes>
        </instance_attributes>
      </primitive>

      <primitive class="lsb" id="apache2" provider="heartbeat" type="apache">
        <operations>
          <op id="jboss2_mon" interval="30s" name="monitor" timeout="20s"/>
        </operations>
      </primitive>
    </group>
  </resources>

  <constraints>
    <rsc_location id="location_server1" rsc="server1">
      <rule id="best_location_server1" score="100">
        <expression attribute="#uname" id="best_location_server1_expr" operation="eq"
        value="n1.domain.com"/>
      </rule>
    </rsc_location>

    <rsc_location id="location_server2" rsc="server2">
      <rule id="best_location_server2" score="100">
        <expression attribute="#uname" id="best_location_server2_expr" operation="eq"
        value="n2.domain.com"/>
      </rule>
    </rsc_location>

    <rsc_location id="server1_connected" rsc="server1">
      <rule id="server1_connected_rule" score="-INFINITY" boolean_op="or">
        <expression id="server1_connected_undefined" attribute="pingd"
        operation="not_defined"/>
        <expression id="server1_connected_zero" attribute="pingd" operation="lte"
        value="0"/>
      </rule>
    </rsc_location>

    <rsc_location id="server2_connected" rsc="server2">
      <rule id="server2_connected_rule" score="-INFINITY" boolean_op="or">
        <expression id="server2_connected_undefined" attribute="pingd"
        operation="not_defined"/>
        <expression id="server2_connected_zero" attribute="pingd" operation="lte"
        value="0"/>
      </rule>
    </rsc_location>
  </constraints>

</configuration>
</cib>

It’s possible to generate the file from a Heartbeat 1 configuration file, haresources located in /etc/ha.d/, with the following command:
python /usr/lib/heartbeat/haresources2cib.py > /var/lib/heartbeat/crm/cib.xml
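For reference, a version 1 haresources line roughly equivalent to the server1 group might look like this (illustrative only, not taken from this setup):

```text
n1.domain.com IPaddr::192.168.0.2/24/eth1 apache
```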

The file can be split in 2 parts: resources and constraints.

Resources

Resources are organized in groups (server1 & 2) putting together a virtual IP address and a service: Apache.
Resources are declared with the <primitive> syntax within the group.
Groups are useful to gather several resources under the same constraints.

The IP1 primitive checks that virtual IP 1 is reachable.
It executes the OCF-type IPaddr script.
OCF scripts are provided with Heartbeat in the rpm packages.
The virtual address, the network mask and the interface can be specified.
The Apache resource type is LSB, meaning it calls a startup script located in the usual /etc/init.d.
The script's name goes in the type attribute: type="name".
In order to run with Heartbeat, the script must be LSB compliant, which means the script must:

  • return its status with “script status”
  • not fail “script start” on a service that is already running
  • not fail stopping a service already stopped

All LSB specifications can be checked at http://www.linux-ha.org/LSBResourceAgent

The following time values can be defined:

  • interval: defines how often the resource’s status is checked
  • timeout: defines the time period before considering a start, stop or status action failed
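Note that the IPaddr primitives above give the netmask as a prefix length (24) rather than in dotted notation. Converting a dotted mask is easy enough in shell:

```shell
# convert a dotted netmask to the prefix length expected by IPaddr's "netmask" value
mask=255.255.255.0
bits=0
for o in $(echo "$mask" | tr '.' ' '); do
  # count the 1-bits in each octet
  while [ "$o" -gt 0 ]; do
    bits=$((bits + o % 2))
    o=$((o / 2))
  done
done
echo "$mask -> /$bits"    # prints: 255.255.255.0 -> /24
```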

Constraints

2 constraints apply to each group of resources:

The favourite location where resources “should” run.
We give a score of 100 to n1.domain.com for the 1st group.
Hence, if n1.domain.com is active and the auto_failback option is set to “on”, resources in this group will always come back to it.

Action depending on the ping result. If none of the gateways answer ping packets, resources move to another server and the node goes into standby status.

A score of -INFINITY means the node will never accept these resources while the gateways are unreachable.

Important Notes

Files Rights

The /etc/ha.d directory contains sensitive data. Rights have to be changed so files are accessible by the owner only, or the application will not launch.

chmod 600 /etc/ha.d/ha.cf

The /var/lib/heartbeat/crm/cib.xml file has to belong to user hacluster and group haclient. It must not be accessible by other users.
chown hacluster:haclient /var/lib/heartbeat/crm/cib.xml
chmod 660 /var/lib/heartbeat/crm/cib.xml

cib.xml is accessed in write mode by Heartbeat, and some companion files are created alongside it.
If you need to edit it manually, stop Heartbeat on all nodes, remove cib.xml.sig in the same directory, edit cib.xml on all nodes, and restart Heartbeat.

It is advised to use crm_resource to make modifications (see section below).

hosts files

The /etc/hosts file must contain all the cluster nodes’ hostnames. These names should be identical to the output of the ‘uname -n’ command.
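A quick sanity check that the local node name resolves the way Heartbeat expects:

```shell
# the name printed by `uname -n` must resolve via /etc/hosts on every node
n=$(uname -n)
if getent hosts "$n" > /dev/null 2>&1; then
  echo "OK: $n resolves"
else
  echo "WARN: $n not found, add it to /etc/hosts on every node"
fi
```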

Services Startup

Heartbeat starts Apache when the service is inactive. It is better to disable Apache's automatic startup (with chkconfig for instance): since the service may take a while to start at boot time, Heartbeat could start it a second time.

chkconfig httpd off

There is a startup script in /etc/init.d/. To launch Heartbeat, run “/etc/init.d/heartbeat start”. You can launch Heartbeat automatically at boot time:
chkconfig heartbeat on

Behaviour and Tests

The following actions simulate incidents that may occur at some stage, and alter the cluster’s health. We study Heartbeat’s behaviour in each case.

Apache shutdown

Local Heartbeat will restart Apache. If the restart fails, a warning is sent to the logs and Heartbeat will not try to launch the service anymore. The virtual IP address remains on the node; however, the address can be moved manually with the Heartbeat tools.

Server’s Shutdown or Crash

Virtual addresses move to other servers.

Heartbeat’s Manual Shutdown on one Node

Virtual addresses are moved to the other nodes. The normal procedure is to run crm_standby to turn the node into standby mode and migrate resources across.

Node to node cable disconnection

Each machine thinks it is on its own and takes both virtual IPs. However, this is not really a problem, as the gateway keeps sending packets to the last entry in its ARP table. A ping to the broadcast address returns duplicate replies.

Gateway disconnection

This could happen in 2 cases:
– The gateway is unreachable. In this case, the 2 nodes remove their virtual IP addresses and stop Apache.
– The connection to one of the nodes is down. Addresses move to the other server, which can normally reach the gateway.

All simulations keep the cluster up, except when the gateway itself is gone. But that is not a cluster problem…

Tools

Heartbeat’s health can be checked in several ways.

Logs

Heartbeat sends messages to the logd daemon that stores them in the /var/log/messages system file.

Unix Tools

Usual Unix commands can be used. Heartbeat creates sub-interfaces that you can check with “ifconfig” or “ip address show”. Processes status can be displayed with the startup scripts or with the “ps” command for instance.

Heartbeat Commands

Heartbeat is provided with a set of commands. Here are the main ones:

  • crmadmin: Controls the node managers on each machine of the cluster.
  • crm_mon: Quick and useful; displays node and resource status.
  • crm_resource: Queries and modifies resource/service related data; can list, migrate, disable or delete resources.
  • crm_verify: Reports the cluster’s warnings and errors.
  • crm_standby: Migrates all of a node’s resources. Useful when upgrading.

Maintenance

Service Shutdown

When upgrading or shutting Apache down, you may proceed as follows:

  • Stop Apache on node 1 and migrate the virtual address on node 2:
    crm_standby -U n1.domain.com -v true
  • Start upgrading n1 and restart the node:
    crm_standby -U n1.domain.com -v false
  • Resources automatically fail back to n1 (if cib.xml is properly configured)

Proceed the same way on n2

Note: The previous commands can be run from any node of the cluster.

It is possible to switch the 2 nodes to standby at the same time; Apache will be stopped on both machines and the virtual addresses removed until a node comes back to running mode.

Machine Reboot

It is not required to switch the node to standby mode before rebooting. However, it is better practice, as resources migrate faster since there is no detection delay.

Resource Failure

Example: Apache crashed and doesn’t restart anymore, all group resources are moved to the 2nd node.

When the issue is resolved and the resource is up again, the cluster’s health can be checked with:
crm_resource --reprobe (or crm_resource -P)

and the resource restarted:
crm_resource --cleanup --resource apache1 (or crm_resource -C -r apache1).
It will move automatically back to the original server.

Adding a New Node to the cluster

In Standby

If the cluster contains 2 nodes connected with a cross-over cable, you will then need a switch for the heartbeat network interfaces.

You first need to add the new node’s information. Edit the /etc/hosts files on the current nodes and add the new node’s hostname; the /etc/hosts contents must then be copied across to the new node. Configure Heartbeat in the same way as on the other nodes.
ha.cf files must contain the “autojoin any” setting to accept new nodes on the fly.

On the new host, start Heartbeat; it should join the cluster automatically.

If not, run the following on one of the original nodes:
/usr/lib/heartbeat/hb_addnode n3.domain.com
The new node only acts as a failover; no service is associated with it. If n1.domain.com goes into standby, resources will move to n3. They will come back as soon as the original server is up again (n1 being the favourite server).

With a New Service

To add a 3rd IP (along with a 3rd Apache), follow the above procedure and then:
– Either stop Heartbeat on the 3 servers and edit the cib.xml files
– Or build files similar to cib.xml, containing the new resources and constraints, and add them to the live cluster. This is the preferred method. Create the following files on one of the nodes:

newGroup.xml

<group id="server3">
  <primitive class="ocf" provider="heartbeat" type="IPaddr" id="IP3">
    <operations>
      <op id="IP3_mon" interval="5s" name="monitor" timeout="2s"/>
    </operations>
    <instance_attributes id="IP3_attr">
      <attributes>
        <nvpair id="IP3_attr_0" name="ip" value="192.168.0.28"/>
        <nvpair id="IP3_attr_1" name="netmask" value="29"/>
        <nvpair id="IP3_attr_2" name="nic" value="eth1"/>
      </attributes>
    </instance_attributes>
  </primitive>

  <primitive class="lsb" provider="heartbeat" type="apache" id="apache3">
    <operations>
      <op id="apache3_mon" interval="5s" name="monitor" timeout="5s"/>
    </operations>
  </primitive>
</group>

newLocationConstraint.xml

<rsc_location id="location_server3" rsc="server3">
  <rule id="best_location_server3" score="100">
    <expression attribute="#uname" id="best_location_server3_expr" operation="eq"
    value="n3.domain.com"/>
  </rule>
</rsc_location>

newPingConstraint.xml

<rsc_location id="server3_connected" rsc="server3">
  <rule id="server3_connected_rule" score="-INFINITY" boolean_op="or">
    <expression id="server3_connected_undefined" attribute="pingd" 
    operation="not_defined"/>
    <expression id="server3_connected_zero" attribute="pingd" operation="lte" 
    value="0"/>
  </rule>
</rsc_location>

Add n3 constraints
cibadmin -C -o constraints -x newLocationConstraint.xml
cibadmin -C -o constraints -x newPingConstraint.xml

Add n3 resources

cibadmin -C -o resources -x newGroup.xml

n3 resources should start right away.

Note: constraints must be added one at a time. If you try to add 2 constraints from within the same file, only the first will be set.

 


Oct 15 2007

Atheros Wireless Interface on Linux Redhat

Published under Linux




Got an old PC or server you want to leave running in a remote place? Don’t want to bother with cables?
Add a wireless card and configure the network service so it (re)connects automatically when the access point (re)boots.
We’ve chosen a Netgear card because it integrates an Atheros-based chipset, which is well supported under Linux.

 

Card detection

Once the card is installed and the system rebooted, check the card has been detected with the command lspci.
You should get the following line in the result set:

# lspci
...
00:0d.0 Ethernet controller: Atheros Communications, Inc. AR5212 802.11abg NIC (rev 01)
...

 

Driver Installation

Unfortunately, most Linux distributions do not provide any driver for Atheros chipset-based wireless cards out of the box.
Madwifi.org has developed and provides a standard Atheros driver for Linux.
To install madwifi, you will need the kernel development and header packages. On Redhat, install these rpms:

  • kernel-header and
  • kernel-devel

Download madwifi, untar and compile it:

tar xfz madwifi-0.9.4.tar.gz
cd madwifi-0.9.4/
make
make install

 
Be aware the module has been installed in your kernel modules directory /lib/modules/`uname -r`/net. This means that, if you upgrade your kernel, you will have to reinstall the matching packages (kernel-header and kernel-devel) and recompile madwifi.
 
Make the kernel module available:

modprobe ath_pci

 

Configuration

The last thing to do is to configure the wireless network interface.
On Redhat-like systems, create a file named ifcfg-ath0 in the same way you would for a classic network interface,
in /etc/sysconfig/network-scripts/. Extra parameters need to be added for wireless values.

# cat /etc/sysconfig/network-scripts/ifcfg-ath0
DEVICE=ath0
BOOTPROTO=static
BROADCAST=192.168.1.255
IPADDR=192.168.1.2
NETMASK=255.255.255.0
NETWORK=192.168.1.0
ONBOOT=yes
TYPE=Wireless
ESSID=YourESSID
MODE=managed
KEY=F...A
CHANNEL=6
RATE=Auto
IWPRIV="authmode 2"
GATEWAY=192.168.1.1

 
From now on, your wireless interface will be managed by the “network” service.
The card is activated after each reboot;
you can turn your access point off and on, and the card will reconnect with no manual intervention.
You can also use the wireless tools such as iwlist and iwconfig to get more features out of it.
Note that this configuration file only works for WEP, not WPA, which is more secure.

 

