This guide will hopefully help you get AirPrint working on older HP LaserJet devices that do not natively support it.
On your Debian 10 server, install avahi-daemon (Bonjour) and CUPS (print queue) servers:
$ sudo apt install avahi-daemon cups
Add a printer queue, replacing your.printer.hostname with the hostname or IP of your printer, and set the description (-D) to whatever you want:
$ sudo lpadmin -p hp -D "HP LaserJet M1522nf MFP" -E -m drv:///sample.drv/laserjet.ppd -v socket://your.printer.hostname
Check if the sharing capability is enabled within CUPS and if not, enable it:
$ sudo cupsctl | grep share
_share_printers=0
$ sudo cupsctl --share-printers
Also enable sharing for the queue itself:
$ sudo lpadmin -p hp -o printer-is-shared=true
You can check the default settings for the printer using lpoptions. The defaults are displayed with an asterisk next to them:
$ lpoptions -d hp -l
PageSize/Media Size: *Letter Legal Executive Tabloid A3 A4 A5 B5 EnvISOB5 Env10 EnvC5 EnvDL EnvMonarch
Resolution/Resolution: 150dpi *300dpi 600dpi
InputSlot/Media Source: *Default Tray1 Tray2 Tray3 Tray4 Manual Envelope
Duplex/2-Sided Printing: *None DuplexNoTumble DuplexTumble
Option1/Duplexer: *False True
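The starred defaults can also be pulled out mechanically. Here's a hedged sketch; the parse_defaults helper is my own, shown against sample lines from the output above (on a live system you'd pipe `lpoptions -d hp -l` into it):

```shell
# Print "Option=default" for each lpoptions line; the default value
# is the entry marked with an asterisk.
parse_defaults() {
  sed -n 's|^\([^/]*\)/[^:]*:.*\*\([^ ]*\).*|\1=\2|p'
}

printf '%s\n' \
  'Resolution/Resolution: 150dpi *300dpi 600dpi' \
  'Duplex/2-Sided Printing: *None DuplexNoTumble DuplexTumble' \
  | parse_defaults
```

With the sample input above this should print Resolution=300dpi and Duplex=None.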
Here I have changed the paper size and default resolution systemwide:
$ sudo lpadmin -p hp -o PageSize=A4
$ sudo lpadmin -p hp -o Resolution=600dpi
You should now be able to see the mDNS entries using avahi-browse (in the avahi-utils package):
$ avahi-browse -at | grep -i hp
+ eth0 IPv6 HP LaserJet M1522nf MFP @ server UNIX Printer local
+ eth0 IPv4 HP LaserJet M1522nf MFP @ server UNIX Printer local
+ eth0 IPv6 HP LaserJet M1522nf MFP @ server Secure Internet Printer local
+ eth0 IPv4 HP LaserJet M1522nf MFP @ server Secure Internet Printer local
+ eth0 IPv6 HP LaserJet M1522nf MFP @ server Internet Printer local
+ eth0 IPv4 HP LaserJet M1522nf MFP @ server Internet Printer local
For fairly minimal effort, this setup seems to work quite well. Although printing is done via AppSocket/JetDirect, CUPS is smart enough to talk to the printer via SNMP to find out the printer status such as low toner or any errors. If it isn't already obvious, the Debian server will need to be on for the AirPrint function to work!
Monday, 7 October 2019
Wednesday, 28 November 2018
Hardening Samba
This post details how you can set up your Samba server to be a bit more resilient than the defaults.
The Samba server security page gives information on using the hosts allow/deny directives, interface binding configuration, and keeping up-to-date, so I'm not going to mention those things here.
I am however going to jump into a few other directives.
First of all, there's no good reason to give out the server's version, so my server replies with "Samba".
I mandate SMB2 as the minimum required protocol, and enforce signing. I really recommend you do this and so does Microsoft. Without mandating signing you are leaving yourself open to man-in-the-middle attacks. These settings will work with clients on Windows 7 and newer, and any non-ancient Linux/macOS.
I'm using the "standalone server" server role, so I can disable NetBIOS completely, and without NetBIOS and SMB1 there's no need to listen on anything other than TCP/445.
Here are smb.conf server directives to get you started with those changes:
[global]
server string = Samba
disable netbios = Yes
server min protocol = SMB2
smb ports = 445
server signing = required
In addition to the above, you should consider disabling anonymous authentication.
With anonymous authentication enabled (the default), anyone can specify a blank user and password to view shares and other information, and talk to IPC$:
user@client:~$ smbclient -m SMB2 -L server -U ''
Enter 's password:
Sharename       Type      Comment
---------       ----      -------
share           Disk
IPC$            IPC       IPC Service (Samba)
Connection to server failed (Error NT_STATUS_CONNECTION_REFUSED)
NetBIOS over TCP disabled -- no workgroup available
To disable this, you can set restrict anonymous in smb.conf:
[global]
restrict anonymous = 2
Restart Samba:
admin@server:~$ sudo systemctl restart smbd
You'll now be denied if you use blank credentials:
user@client:~$ smbclient -m SMB2 -L server -U ''
Enter 's password:
tree connect failed: NT_STATUS_ACCESS_DENIED
One other thing I'll mention is my tendency to add a "valid users" line to each share, and whitelist just the users/groups requiring permission.
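As a sketch, a share stanza along these lines (the share path and group name are placeholders, not from a real config):

```
[share]
path = /srv/share
valid users = @staff, tim
```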
Thanks for reading!
Wednesday, 12 September 2018
Plex hardware accelerated transcoding within LXC
I run Plex Media Server within an LXC container on my NAS. The NAS itself is a QNAP TS-251+ but it is running Debian 9. I have all the functions I use separated into individual LXC containers.
Plex runs quite well considering the low powered Celeron J1900 processor, but it does tend to struggle with HD transcoding. I managed to get GPU assisted transcoding working this evening which appears to help considerably!
Here are the requirements:
https://support.plex.tv/articles/115002178853-using-hardware-accelerated-streaming/
Fortunately the Celeron J1900 supports Intel Quick Sync Video.
First of all I checked the host could see the DRI stuff:
tim@host:~$ journalctl
Jul 16 21:29:30 jupiter kernel: [drm] Initialized i915 1.6.0 20160919 for 0000:00:02.0 on minor 0
tim@host:~$ ls -l /dev/dri
total 0
crw-rw---- 1 root video 226, 0 Jul 16 21:29 card0
crw-rw---- 1 root video 226, 64 Jul 16 21:29 controlD64
crw-rw---- 1 root video 226, 128 Jul 16 21:29 renderD128
I then tried mapping the devices through to the container:
tim@host:~$ sudo vi /var/lib/lxc/plex/config
...
lxc.mount.entry = /dev/dri dev/dri none bind,create=dir 0 0
I restarted the container then installed the relevant driver and the vainfo program within it:
tim@plex:~$ sudo apt-get install i965-va-driver vainfo
Both the Plex user and my user were in the video group yet vainfo was just saying 'Abort' instead of giving any useful info. I did some further digging:
tim@plex:~$ strace vainfo
...
open("/dev/dri/renderD128", O_RDWR) = -1 EPERM (Operation not permitted)
open("/dev/dri/card0", O_RDWR) = -1 EPERM (Operation not permitted)
...
The container did not have permissions to talk to those devices.
I did a bit of reading on control groups and device numbers and came up with the following rule to allow the container to use any character device with a major number of 226 (Direct Rendering Infrastructure):
tim@host:~$ sudo vi /var/lib/lxc/plex/config
...
lxc.cgroup.devices.allow = c 226:* rw
lxc.mount.entry = /dev/dri dev/dri none bind,create=dir 0 0
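The 226 in that rule is just the major number from the `ls -l /dev/dri` listing. As a sketch, you could derive the rule from the listing itself; the major_rule helper is mine, fed here with the sample output from above (on a real host you'd pipe in `ls -l /dev/dri`):

```shell
# Turn `ls -l /dev/dri` output into an LXC device-allow rule:
# field 5 of a character-device line is "MAJOR," -- strip the comma.
major_rule() {
  awk '/^c/ { sub(",", "", $5); print "c " $5 ":* rw" }' | sort -u
}

printf '%s\n' \
  'crw-rw---- 1 root video 226,   0 Jul 16 21:29 card0' \
  'crw-rw---- 1 root video 226, 128 Jul 16 21:29 renderD128' \
  | major_rule
```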
After stopping and starting the container, I could now run vainfo successfully:
tim@plex:~$ vainfo
error: XDG_RUNTIME_DIR not set in the environment.
error: can't connect to X server!
libva info: VA-API version 0.39.4
libva info: va_getDriverName() returns 0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
libva info: Found init function __vaDriverInit_0_39
libva info: va_openDriver() returns 0
vainfo: VA-API version: 0.39 (libva 1.7.3)
vainfo: Driver version: Intel i965 driver for Intel(R) Bay Trail - 1.7.3
vainfo: Supported profile and entrypoints
VAProfileMPEG2Simple : VAEntrypointVLD
VAProfileMPEG2Simple : VAEntrypointEncSlice
VAProfileMPEG2Main : VAEntrypointVLD
VAProfileMPEG2Main : VAEntrypointEncSlice
VAProfileH264ConstrainedBaseline: VAEntrypointVLD
VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
VAProfileH264Main : VAEntrypointVLD
VAProfileH264Main : VAEntrypointEncSlice
VAProfileH264High : VAEntrypointVLD
VAProfileH264High : VAEntrypointEncSlice
VAProfileH264StereoHigh : VAEntrypointVLD
VAProfileVC1Simple : VAEntrypointVLD
VAProfileVC1Main : VAEntrypointVLD
VAProfileVC1Advanced : VAEntrypointVLD
VAProfileNone : VAEntrypointVideoProc
VAProfileJPEGBaseline : VAEntrypointVLD
Monday, 7 May 2018
Jenkins on Kali 2017.1
Here's a quick run through of getting the Jenkins Pipeline demos working on Kali 2017.1 for testing purposes.
Install Docker
Add the Docker package certificate:
tim@kali:~$ curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
If we try to use add-apt-respository we will get an error as Kali is not supported:
tim@kali:~$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian stretch stable"
aptsources.distro.NoDistroTemplateException: Error: could not find a distribution template for Kali/kali-rolling
We can instead manually add to /etc/apt/sources.list:
tim@kali:~$ sudo vi /etc/apt/sources.list
deb [arch=amd64] https://download.docker.com/linux/debian stretch stable
tim@kali:~$ sudo apt-get update
tim@kali:~$ sudo apt-get install docker-ce
Create users for services
We will be using key authentication or sudo, so there's no need for passwords on the service accounts:
tim@kali:~$ sudo adduser --disabled-password git
tim@kali:~$ sudo adduser --disabled-password jenkins
We want Jenkins to be able to utilise Docker without having to be root:
tim@kali:~$ sudo adduser jenkins docker
Download and run Jenkins
When testing I prefer this method over the Debian package as it is all self-contained:
tim@kali:~$ sudo -u jenkins -i
jenkins@kali:~$ mkdir ~/jenkins && cd ~/jenkins
jenkins@kali:~/jenkins$ wget "http://mirrors.jenkins.io/war-stable/latest/jenkins.war"
jenkins@kali:~/jenkins$ java -jar jenkins.war --httpPort=8080
Set up Git remote
This will set up a repo you can access over SSH:
tim@kali:~$ sudo apt-get install git-core
tim@kali:~$ sudo systemctl start ssh
tim@kali:~$ sudo -u git -i
git@kali:~$ mkdir ~/.ssh ~/repo
git@kali:~$ chmod 0700 ~/.ssh
git@kali:~$ cd ~/repo
git@kali:~/repo$ git init --bare
Set up SSH keys
Create keys for your user and the Jenkins user and add to Git's authorized_keys file:
tim@kali:~$ sudo ssh-keygen
tim@kali:~$ sudo -u jenkins ssh-keygen
tim@kali:~$ cat ~/.ssh/id_rsa.pub | sudo -u git tee -a /home/git/.ssh/authorized_keys
tim@kali:~$ sudo -u jenkins cat /home/jenkins/.ssh/id_rsa.pub | sudo -u git tee -a /home/git/.ssh/authorized_keys
Set up local Git repo
Push your test Jenkinsfile to the remote repo:
tim@kali:~$ mkdir repo && cd repo
tim@kali:~/repo$ git init
tim@kali:~/repo$ vi Jenkinsfile
tim@kali:~/repo$ git add .
tim@kali:~/repo$ git commit
tim@kali:~/repo$ git remote add origin git@localhost:repo
tim@kali:~/repo$ git push --set-upstream origin master
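The Jenkinsfile created above can be minimal; a sketch along the lines of the Pipeline tour's hello-world example (not the exact demo file):

```groovy
pipeline {
    agent any
    stages {
        stage('Hello') {
            steps {
                sh 'echo Hello from the pipeline'
            }
        }
    }
}
```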
You should now be able to successfully run the Pipeline demos here:
https://jenkins.io/doc/pipeline/tour/hello-world/
You can set the Git server in Jenkins as git@localhost:repo and it will work the same as a remote Git server (BitBucket etc).
As this is for testing purposes, if you reboot you'll have to start SSH and Jenkins again manually.
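If the manual restarts ever get annoying, a systemd unit along these lines would keep Jenkins running; the paths assume the layout above and /usr/bin/java, so adjust to suit:

```ini
# /etc/systemd/system/jenkins.service -- hypothetical unit, not from the post
[Unit]
Description=Jenkins (war)
After=network.target

[Service]
User=jenkins
ExecStart=/usr/bin/java -jar /home/jenkins/jenkins/jenkins.war --httpPort=8080
Restart=on-failure

[Install]
WantedBy=multi-user.target
```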
Tuesday, 1 August 2017
Solution to Error 0x80070714 when attempting to upgrade to Windows 10 version 1703 (Creators Update)
I was attempting to patch a Windows 10 Pro machine from version 1607 to 1703 (Creators Update), however the process kept failing with Error 0x80070714:
Feature update to Windows 10, version 1703 - Error 0x80070714
The solution was to stop the MSSQLSERVER service before kicking off the update:
Right-click the Start button (or press Windows+X) and choose "Command Prompt (Admin)" then type the following:
C:\WINDOWS\system32>net stop MSSQLSERVER
The SQL Server (MSSQLSERVER) service is stopping.
The SQL Server (MSSQLSERVER) service was stopped successfully.
Once the machine reboots after the update the service will be running again, so this shouldn't do any harm.
You may have other MSSQL instances with different service names; the same process applies.
Monday, 10 July 2017
Down the OVA compatibility rabbit hole
I recently volunteered to create a B2R CTF for SecTalks_BNE. It was fairly simple to create the content within the machine; however, I came across a few hurdles when trying to make the machine as portable as possible. I wanted it to be easily usable on VirtualBox as well as VMware Fusion, Player and Workstation.
Before embarking on this project I had foolishly assumed I could just create the VM in VirtualBox and then "Export Appliance..." to create a portable OVA. If only it were that simple!
The OVA files that were created by VirtualBox worked fine by other VirtualBox users, but VMware users were getting various levels of success; Fusion wouldn't play nice at all.
I've created this post so that I remember what to do again down the track, and as a side bonus hopefully someone else will benefit or learn from it!
Let me explain some acronyms first
An OVA file is an Open Virtualisation Appliance. It's essentially a tarball containing an OVF, one or more disk images (usually VMDK files) and a manifest (checksum) file.
The OVF (Open Virtualisation Format) specifies the configuration of the virtual machine. The disk images contain data held by the virtual drives.
Gathering test data
To get some VMware test data I dragged my old HP N54L out of the cupboard and installed ESXi 6.5 on it. The disk performance was horrendously slow until I disabled the problematic AHCI driver as per this blog.
After creating a few OVA files from ESXi, my testing concluded that VirtualBox happily accepted a VMware OVA but VMware had a hard time working with a VirtualBox OVA. One solution would be to do all my development on ESXi, but I quite like using VirtualBox on my laptop!
My VirtualBox solution
I decided to keep things simple and use ESXi to generate the initial OVA. I chose to target VMware 4 to keep it compatible with pretty much everything. After this step ESXi was no longer required.
I then unpacked said OVA, prepared the replacement disk image with VirtualBox and rolled my own OVA using a few commands.
The initial OVA contained the following:
$ tar xvf covfefe.ova
covfefe.ovf
covfefe.mf
disk-0.vmdk
To prepare the replacement disk-0.vmdk file, I ran through the steps in my earlier blog post and converted from VDI to VMDK with clonemedium (also mentioned in the same post).
After replacing the VMDK file, I edited the size entry in the OVF to reflect the new file:
<File ovf:href="disk-0.vmdk" ovf:id="file1" ovf:size="464093696"/>
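If you'd rather not hand-edit the number, the attribute can be patched from the real file size. A sketch; the patch_size helper is mine and the filenames follow the post:

```shell
# Rewrite the ovf:size attribute of the disk-0.vmdk File entry.
# Usage: patch_size <bytes> < old.ovf > new.ovf
patch_size() {
  sed "s|\(ovf:href=\"disk-0\.vmdk\"[^>]*ovf:size=\"\)[0-9]*|\1$1|"
}

# e.g. patch_size "$(wc -c < disk-0.vmdk)" < covfefe.ovf > covfefe-new.ovf
echo '<File ovf:href="disk-0.vmdk" ovf:id="file1" ovf:size="1"/>' | patch_size 464093696
```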
Once I finished editing the OVF I had to create the correct checksums to use in the manifest file:
$ shasum covfefe.ovf disk-0.vmdk
249eef04df64f45a185e809e18fb285cadfcd6f0 covfefe.ovf
ae1718beb7d5eb7dfb5158718b0eceda812512a2 disk-0.vmdk
After the changes my manifest file looked like this:
$ cat covfefe.mf
SHA1 (covfefe.ovf)= 249eef04df64f45a185e809e18fb285cadfcd6f0
SHA1 (disk-0.vmdk)= ae1718beb7d5eb7dfb5158718b0eceda812512a2
I then reassembled the OVA file:
$ tar cf covfefe.ova covfefe.ovf covfefe.mf disk-0.vmdk
Just as a test I also did the assembly using OVF Tool as it did some extra checks while assembling:
$ /Applications/VMware\ OVF\ Tool/ovftool covfefe.ovf covfefe.ova
The OVA has worked flawlessly on everything I've tested it on so far which is VirtualBox 5.1.22, VMware ESXi 6.5, Fusion 8.5.8 and Player 6.0.1.
Prepping a Linux VM for OVA export
These are the steps I recommend to prepare a Linux VM for OVA export. It should keep the size down to a minimum and prevent headaches and confusion down the track!
I'm using VirtualBox but the info applies to VMware. You'll just have to read the VMware documentation for the compacting section.
I am running these commands from a Debian Stretch live CD inside the guest, and have mounted the destination filesystem (/dev/sda1) as /mnt:
$ sudo mount /dev/sda1 /mnt
Disable systemd from renaming network interfaces
If you leave this enabled, you'll have different network interface names for VirtualBox and VMware, so your interface definitions won't work in both!
I disable this by adding the kernel parameter "net.ifnames=0"; you can do this within /mnt/etc/default/grub:
GRUB_CMDLINE_LINUX="net.ifnames=0"
Then run update-grub from within a chroot:
$ sudo mount --bind /dev /mnt/dev
$ sudo mount --bind /proc /mnt/proc
$ sudo mount --bind /sys /mnt/sys
$ sudo chroot /mnt
# update-grub
# exit
$ sudo umount /mnt/dev /mnt/proc /mnt/sys
You'll now want to adjust /etc/network/interfaces (or equivalent) accordingly to reflect eth0 instead of enp0s17 or whatever.
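For example, a minimal DHCP stanza in /mnt/etc/network/interfaces (assuming a single NIC):

```
auto eth0
iface eth0 inet dhcp
```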
Sanitise the log directory
Nuke the contents but leave files in place:
$ sudo find /mnt/var/log -type f -exec sh -c 'cat /dev/null > {}' \;
Discard unallocated blocks
Unmount the filesystem then discard unallocated blocks:
$ sudo umount /mnt
$ sudo e2fsck -E discard /dev/sda1
Compact the disk image
This is done from the host, not the guest.
If you're using a VDI file, you can use modifymedium --compact:
https://www.virtualbox.org/manual/ch08.html#vboxmanage-modifyvdi
If you're using a VMDK file, you can use clonemedium:
https://www.virtualbox.org/manual/ch08.html#vboxmanage-clonevdi
Wednesday, 26 April 2017
Installing Raspbian on the Raspberry Pi 3 using raspbian-ua-netinst
I really like using the Raspbian unattended netinstaller (raspbian-ua-netinst) for doing headless installs of Raspbian to Raspberry Pi devices. You pretty much write the installer image to SD, create a configuration file, then insert the SD into the Pi and let it do the rest.
I wasn't able to install Raspbian to my Raspberry Pi 3 using the current latest build (1.0.9) of raspbian-ua-netinst as it still lacks support for this newer hardware.
Below is a quick guide on what I did to get it up and running successfully. I ran this from a Raspberry Pi but you could just as easily use any Linux machine:
Pull down the v1.1.x branch from GitHub:
$ git clone -b v1.1.x https://github.com/debian-pi/raspbian-ua-netinst.git
Download and build:
$ cd raspbian-ua-netinst
$ ./build.sh
Create the images you can then write to SD, this requires root for the loopback setup:
$ sudo ./buildroot.sh
If you're using a Raspberry Pi with limited swap like me, you may get an error when creating the xz archive because xz cannot allocate sufficient memory. That's no problem, as you can use the uncompressed or bz2 image instead.
As an example you could run bzcat raspbian-ua-netinst-20170426-gited24416.img.bz2 redirected to the destination SD card (the card itself, not a partition device).
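That step can be sketched as below. The write_image helper is mine; run it as root when the target is a real device, and triple-check the device name with lsblk first, because this overwrites the whole card:

```shell
# Decompress a bz2 image straight onto a target block device (or file).
# Usage (as root, destructive!): write_image image.img.bz2 /dev/sdX
write_image() {
  bzcat "$1" | dd of="$2" bs=4M conv=fsync
}
```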
Hopefully this post will be redundant soon when a newer raspbian-ua-netinst is released with Raspberry Pi 3 support, but until then I hope this is useful to someone!
Monday, 19 December 2016
Issue where KVM guest freezes just before installation of CentOS 7
I've been playing around with KVM on CentOS 7 in preparation for the RHCE exam. I was experiencing an issue where the guest virtual machine would freeze just before attempting an install (again, CentOS 7 as the guest).
The testing machine is quite old (has an Intel Core 2 6400 CPU) but it hasn't shown any other symptoms of hardware issues.
The logs didn't appear to show anything of interest other than some debugging information which is apparently normal:
[20389.379023] kvm [19537]: vcpu0 unhandled rdmsr: 0x60d
[20389.379034] kvm [19537]: vcpu0 unhandled rdmsr: 0x3f8
[20389.379039] kvm [19537]: vcpu0 unhandled rdmsr: 0x3f9
[20389.379043] kvm [19537]: vcpu0 unhandled rdmsr: 0x3fa
[20389.379048] kvm [19537]: vcpu0 unhandled rdmsr: 0x630
[20389.379053] kvm [19537]: vcpu0 unhandled rdmsr: 0x631
[20389.379057] kvm [19537]: vcpu0 unhandled rdmsr: 0x632
Anyway, I was able to work around the issue by feeding the "--cpu host" option to virt-install, or by ticking "Copy host CPU configuration" under the CPUs tab of the VM configuration.
Hope this helps save someone else some time!
Monday, 10 October 2016
Installing Debian on the APU2
This is a short post detailing the install of Debian on the PC Engines APU2 using PXE.
First of all you'll need to ensure you are running version 160311 or newer BIOS. You can find the BIOS update details here. If the PXE options are missing then there's a good chance you aren't running a new enough BIOS!
Connect to the system's console via the serial port using a baud rate of 115,200. I typically use screen on Linux/macOS or PuTTY on Windows.
Start the APU2 and press Ctrl+B when prompted to enter iPXE, or choose iPXE from the boot selection menu (F10).
Attempt to boot from PXE using DHCP:
iPXE> autoboot
If all is well you will get to the "Debian GNU/Linux installer boot menu" heading; press TAB to edit the Install menu entry.
This should bring up something along the lines of:
> debian-installer/amd64/linux vga=788 initrd=debian-installer/amd64/initrd.gz --- quiet
You'll want to define the serial console by adding the console parameter to the end (and preseed parameter if used):
> debian-installer/amd64/linux vga=788 initrd=debian-installer/amd64/initrd.gz --- quiet console=ttyS0,115200
Press enter and you should be on your way!
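One follow-up worth noting: once the install finishes, the new system also needs its console pointed at the serial port to stay usable headless. A sketch of the usual Debian approach — these are the standard GRUB knobs, but double-check them against your release:

```
# /etc/default/grub — then run: update-grub
GRUB_CMDLINE_LINUX="console=ttyS0,115200n8"
GRUB_TERMINAL=serial
GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
```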
PXE boot Debian using RouterOS as PXE server
I would typically use a Linux server for the purposes of PXE booting, but this is so straightforward it's a very attractive option. I'm using a MikroTik RB2011 (RouterOS v6.34.6) successfully.
This example assumes your router's LAN IP is 172.16.8.1 and the local subnet is 172.16.8.0/24.
First of all, download the netboot archive to a Linux machine (I'm using a Raspberry Pi here):
tim@raspberrypi /tmp $ wget http://ftp.au.debian.org/debian/dists/jessie/main/installer-amd64/current/images/netboot/netboot.tar.gz
tim@raspberrypi /tmp $ wget http://ftp.au.debian.org/debian/dists/jessie/main/installer-amd64/current/images/SHA256SUMS
Check that your archive matches the checksum file:
tim@raspberrypi /tmp $ grep `sha256sum netboot.tar.gz` SHA256SUMS
SHA256SUMS:460e2ed7db2d98edb09e5413ad72b71e3132a9628af01d793aaca90e7b317d46 ./netboot/netboot.tar.gz
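The backtick expansion above makes grep search for the freshly computed hash (the binary itself also lands on grep's file list, which is why the output carries the SHA256SUMS: prefix). The same idea in a cleaner form, demonstrated against a throwaway file — names here are made up for the demo:

```shell
# Create a throwaway file and a matching sums file.
echo "demo payload" > demo.tar.gz
sha256sum demo.tar.gz | awk '{print $1 "  ./netboot/demo.tar.gz"}' > SHA256SUMS.demo

# Look the freshly computed hash up in the sums file;
# a match means the download is intact.
if grep -q "$(sha256sum demo.tar.gz | awk '{print $1}')" SHA256SUMS.demo; then
  echo "checksum OK"
else
  echo "checksum MISMATCH"
fi
```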
Extract the archive to a tftp directory:
tim@raspberrypi /tmp $ mkdir tftp && tar xf netboot.tar.gz -C tftp
Copy tftp folder to the MikroTik:
tim@raspberrypi /tmp $ scp -r tftp admin-tim@172.16.8.1:
On the MikroTik, configure TFTP with a base directory of tftp (omitting req-filename matches all requests):
[admin-tim@MikroTik] /ip tftp add ip-address=172.16.8.0/24 real-filename=tftp
Configure DHCP for PXE booting:
[admin-tim@MikroTik] /ip dhcp-server network set [ find address=172.16.8.0/24 ] boot-file-name=pxelinux.0 next-server=172.16.8.1
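To confirm the settings took, you can print them back (a sketch — the exact output layout varies between RouterOS versions):

```
[admin-tim@MikroTik] /ip tftp print
[admin-tim@MikroTik] /ip dhcp-server network print detail
```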
Friday, 13 April 2012
A potential backup solution for small sites running VMware ESXi
Today, external consumer USB3 and/or eSATA drives can be a great low cost alternative to tape. For most small outfits, they fulfil the speed and capacity requirements for nightly backups. I use the same rotation scheme with these drives as I did tape with great success.
Unfortunately these drives can't easily be utilised by those running virtualised servers on top of ESXi. VMware offers SCSI pass-through as a supported option, however the tape drives and media are quite expensive by comparison.
VMware offered a glimpse of hope with their USB pass-through introduced in ESXi 4.1, but it proved to have extremely poor throughput (~7MB/sec) so can realistically only shift a couple of hundred GB at most per night.
I have trialled some USB over IP devices; the best of these can lift the throughput from ~7MB/sec to ~25MB/sec, but the drivers can be problematic and are often only available for Windows platforms.
This got me thinking about presenting a USB3 controller via ESXi's VMDirectPath I/O feature.
VMDirectPath I/O requires a CPU and motherboard capable of Intel Virtualization Technology for Directed I/O (VT-d) or AMD I/O Virtualization Technology (IOMMU). It also requires that your target VM is at hardware version 7 or greater. A full list of requirements can be found at http://kb.vmware.com/kb/1010789.
I tested pass-through on a card with the NEC/Renesas uPD720200A chipset (Lindy part # 51122) running firmware 4015. The test VM runs Windows Server 2003R2 with the Renesas 2.1.28.1 driver. I had to configure the VM with pciPassthru0.msiEnabled = "FALSE" as per http://www.vmware.com/pdf/vsp_4_vmdirectpath_host.pdf or the device would show up with a yellow bang in Device Manager and would not function.
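For reference, the relevant .vmx entries end up looking something like this (a sketch — the pciPassthru0.present line is normally added by ESXi when you attach the PCI device, and only msiEnabled is set by hand):

```
pciPassthru0.present = "TRUE"
pciPassthru0.msiEnabled = "FALSE"
```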
The final result - over 80MB/sec throughput (both read and write) from a Seagate 2.5" USB3 drive!
Thursday, 8 October 2009
BlackBerry MDS proxy pain
I'm just having a rant about MDS SSL connections through a proxy. Non-SSL traffic will work fine, however SSL traffic appears to go direct even when proxy settings have been defined as per KB11028. My regular expression matches the addresses fine.
Surely people out there want/need to proxy all their BES MDS traffic?
Wednesday, 7 January 2009
DNS resolution on iPhone
I've been playing with a few iPhones lately and have had trouble getting WiFi working through our proxy. After much hair pulling the problem turns out to be a feature in the iPhone DNS resolver that refuses to look up any hostname ending in ".local". This also appears to be a problem on Mac OS X:
http://support.apple.com/kb/HT2385?viewlocale=en_US
With OS X you can add "local" to the Search Domains field and disable this behaviour, unfortunately it doesn't work for the iPhone.
Sunday, 31 August 2008
Data destruction
After cleaning my home office I was left with some old hard drives to dispose of, which got me thinking about data destruction. In the past I cleared my drives with a couple of passes of random data using dd, but is this thorough enough?
This time round I have used a free bootable CD called CopyWipe (great utility, BootIt NG is also worth a mention). Each drive was given 5 passes, and then taken to with a hammer just to be sure. I've linked a picture to the "after" shot.
I can see data destruction being a larger problem as time goes on. I'd be interested to know the techniques others use for this problem.
Wednesday, 27 August 2008
Archiving files from my Topfield PVR
I've had a Topfield PVR for quite a few years now. The unit is great, I can't fault it really. Until recently I did however have one ongoing problem; I kept running out of space! To help combat the space problem I upgraded to a Samsung 400GB drive but this was only a short term band-aid.
The next solution was commissioning a Linksys NSLU2 running uNSLUng and ftpd-topfield to allow FTP access to the unit (my computer isn't anywhere near the TV and the Topfield only has a USB port). So the space problem on the Topfield was fixed, but I had loads of transport stream files sitting on my computer. It was just too expensive (time-wise) to edit out all the ads, convert to MPEG-2 and burn to DVD or DivX. So last weekend I scripted it:
- Create ad removal cutpoints with comskip
- Feed the cutpoints into ProjectX then demux
- Combine the audio and video into an MPEG-2 file with mplex
- Encode with Dr. DivX OSS
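The steps above can be sketched as a script. This is a dry-run outline only — the flags for comskip, ProjectX and mplex are illustrative rather than lifted from each tool's docs, and the DivX encoder command is a placeholder name:

```shell
#!/bin/bash
# Dry-run sketch of the archiving pipeline; swap the echo in run()
# for "$@" to actually execute each tool.
TS="recording.ts"
run() { echo "+ $*" | tee -a pipeline.log; }

run comskip "$TS"                                             # 1. ad cutpoints
run java -jar ProjectX.jar -cut "${TS%.ts}.Xcl" -demux "$TS"  # 2. cut and demux
run mplex -f 8 -o "${TS%.ts}.mpg" "${TS%.ts}.m2v" "${TS%.ts}.mp2"  # 3. remux to MPEG-2
run drdivx-cli "${TS%.ts}.mpg"                                # 4. encode (placeholder)
```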
The whole thing was fairly trivial after reading the CLI documentation for each program, but if you need a hand feel free to contact me.
Tuesday, 29 July 2008
Moving from VMware Server to ESXi
At home I'm currently using VMware Server with Windows 2003 as the host OS. In addition to running 5 guest operating systems, the host OS performs the following tasks:
- Shuts down the server in the event of an extended power outage thanks to APC PowerChute.
- Backs up the VMDK files to locally attached USB hard drives.
- Allows remote administration via Terminal Services (Remote Desktop).
- Hosts my complex virtual networking services including NAT (with port forwards) and routing for the virtual machines.
If anyone out there has made the move, I'd love to hear their experiences and feedback!
Tuesday, 3 June 2008
Killing the registration prompt in RawShooter Essentials 2006
I'm still using the final version of RawShooter Essentials as it supports my SLR's RAW format (Adobe have now bought out Pixmantec, so this is no longer being updated or supplied by them; it is only available from other sources such as download.com). So, if you've managed to acquire it you will find that whenever RawShooter Essentials is freshly installed it will prompt you to register each time you start the program. As Adobe have shut down Pixmantec's servers the registration will fail. I compared the Windows registry from a fresh install with an existing (registered) copy and found that the registered copy had some extra registry entries. With these entries you should be able to kill the annoying prompt:
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\CLSID\{0F540988-8449-4C30-921E-74BCCEA70535}]
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\CLSID\{0F540988-8449-4C30-921E-74BCCEA70535}\ProgId]

Save the above contents to a file with the extension ".REG" and double-click it to install the entries. You may have to fix the lines if they wrap. You will now find that next time you open the program the registration prompt will be gone.
Monday, 17 March 2008
VoIP headaches
I've recently signed up with PennyTel to get better prices on phone calls. This was after two relatives of mine both recommended PennyTel and said how easy the whole thing was to set up when using a Linksys SPA-3102.
OK, so I signed up and purchased the Linksys device. I set the networking stuff through the phone then followed the guide on the PennyTel website to configure SIP (VoIP connectivity stuff). I was feeling pretty good about the whole thing, that is until I made the first phone call!
I thought I'd try to impress a mate so I called up one of my tech savvy friends and told them I was using VoIP to talk to them. The quality sounded quite good, then after 32 seconds the call dropped out! I had called a mobile so I thought it may just be a glitch. The next two calls resulted in the same drop out after 32 seconds. By this stage my friend thought it was quite amusing that my new phone service was so unreliable after I had been boasting about the cheap call rates!
After hours of Googling and messages back and forth between PennyTel support, I still hadn't managed to avoid the call drop out, or another intermittent problem where the SIP registration was randomly failing. The settings looked fine, and PennyTel didn't appear to have any outages as I tested things with a soft phone from another DSL connection. I was really regretting the whole thing, and getting pretty pissed off. I had a think about the whole scenario, and the only thing I hadn't eliminated was my DrayTek Vigor 2600We ADSL router. I had already set the port forwards required for the Linksys SPA (UDP 5060-5061 and 16384-16482) so thought nothing more of router configuration. As a last resort, I searched the Internet for people running VoIP through their DrayTek to see if any incompatibilities existed. I came across a site with someone experiencing my exact problem, and they had a workaround! It appears that the 2600We has a SIP application layer proxy enabled by default. This really confuses things on the Linksys and has to be disabled. After telnetting to the device and entering the following command, things were working great:
sys sip_alg 0
Note that you may need to upgrade your DrayTek firmware for this command to be available.
After the changes I made some calls and no longer got disconnected after 32 seconds! Woohoo! At the end of the day I'm glad I chose VoIP for the cost savings, even though it caused me grief the first few days.
Update: One other setting I found needed a bit of tweaking was the dial plan. Here is my current Brisbane dial plan as an example:
(000S0<:@gw0>|<:07>[3-5]xxxxxxxS0|0[23478]xxxxxxxxS0|1[38]xx.<:@gw0>|19xx.!|xx.)
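For what it's worth, my reading of that plan rule by rule (Sipura dial-plan syntax; the interpretation is mine, so double-check it against the SPA admin guide):

```
(000S0<:@gw0>         emergency 000, dialled immediately via the PSTN line gateway
|<:07>[3-5]xxxxxxxS0  8-digit local numbers, with the 07 area code prepended
|0[23478]xxxxxxxxS0   national fixed-line and mobile numbers
|1[38]xx.<:@gw0>      13/18xx numbers, routed via the PSTN line gateway
|19xx.!               19xx premium numbers barred (the ! rejects the call)
|xx.)                 anything else, sent as dialled
```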
Monday, 13 March 2006
Keeping up with multiple Blogs
After a quick search for RSS aggregators, I found Google Reader to do exactly what I want. It saves so much of my time monitoring RSS feeds from a central location, and being web-based means I can access it from anywhere. In case you are wondering, I don't work for Google!
Update: Since publishing this, I've had a few people tell me how much Google Reader sucks compared to the alternatives out there. Greg Black recommended Bloglines to me, and after giving it a go I must say that it's a much better solution. Looks like I'll stick with this one for a while.