Technical Tips for video meetings

It seems a lot of people have already written a lot on the etiquette of video meetings, so in this little post, I’ll try to contribute with some of the technical tips which don’t seem to be covered as much.

Network connection

While wifi mostly seems to work fine, it can cause issues. If you have the option to use a wired connection for the device used for video meetings, do so. It will have less latency than any wifi connection and improve the experience.

If having a wired connection isn’t an option, make the most of your wifi:

  • A 5 GHz network should be preferred, as it has more bandwidth – but it has a shorter range.
  • Make sure to have a strong wifi signal. Moving to the garden (weather allowing) might be fun, but it may have a significant impact on your signal strength and thus the video conference experience.
  • If possible, keep as few devices as possible on the wifi network used for video conferencing. Move IoT devices, gaming consoles, and other gear to another network (2.4 GHz) – and use the 5 GHz network for video conferences.

The VPN Challenge

You may be required to run a VPN client. Most video conference providers are aware of this, as are internal security departments. It should usually not be a challenge to run the video conference through a VPN, but it may cause three other issues:

  • The VPN server/-service needs capacity (physical resources, licenses) to support all users.
  • The VPN service needs bandwidth to support the users connected.
  • The VPN service causes latency which may degrade the experience.

If the VPN service has capacity issues, you will have trouble getting the VPN up and running at all. Your IT department should know this and either address it or provide guidance on what to do.

VPN bandwidth issues often follow a fix of capacity issues, as resolving the capacity issues increases the use – and thus may exhaust the available bandwidth. I’ve found that monitoring available bandwidth in LANs – and even internet uplinks – doesn’t always happen, and the IT department supporting the VPN service may not monitor this and be aware of issues.

Mitigation may include not using streaming services (Spotify, Netflix, Youtube) while connected to the VPN service (and making this a general policy for all users) – most VPNs route all traffic through them, including the streaming traffic.

If the VPN service causes latency which degrades the experience to an unacceptable level, the first option should be to look into mitigating this by optimizing the local network (as described in the first section).

The double mute

Most video conference systems allow you to mute yourself. Your headset, speaker or computer probably also has a mute button available. I always suggest using both – always.

The “hardware” mute often has a clear indication that it’s engaged. I mostly use a speaker for meetings. It has a red dotted ring in the center which lights up when the mute is engaged – and people needing to say something brief during a meeting can see the indication and know that what is said will not be broadcast to the meeting.

The reason I also use the mute in the video conferencing software is that any sound generated by your laptop will most likely be routed into the conference. Using this mute, the many notification sounds (new mail, new Slack message, incoming VoIP calls) are not played in the meeting, and if you do concurrent research for the meeting, auto-playing full-blast sound ads will not disrupt it.

Beware of the background

When adding video to the call, beware of the background. If you have an empty wall behind you it’s probably fine, but that’s rarely the case. Make sure that people walking about in the background are aware they may be part of the video conference, and make sure that items displayed in the background are fine – particularly when working from home (personal stuff) or at the office (secret plans on whiteboards).

Changing the background

Making the desired physical changes to the background is always preferable, as you have complete control of the result – changing the angle, removing stuff from view, locking doors in the background, or whatever else may be suitable measures.

If this isn’t feasible, video conferencing tools like Zoom and Microsoft Teams have the ability to create a virtual green screen and change the background in software. Before you do this, consider the following:

  • The virtual green screen does require compute power and may cause a heavy load on your computer.
  • Don’t use animated backgrounds. An animated background causes even more load on your system.
  • Use a suitable background. While palm beaches, your favourite cartoon characters or other artwork may be nice and entertaining, they are also distracting. Make it boring and suitable – like a bland office or a simple pattern – and in most cases, the generic blur background in Microsoft Teams is a nice default choice.

Prepare for the meeting – before the meeting

Still talking technical here, but here are a few common blunders you can avoid:

  • Make sure you’re supplying power to the device continuously.
    It’s not cool to force a break to find a charger to keep your device going. Also, when running on battery, your device often has “undesirable behaviour” (including dimming the screen and often limiting performance in other ways).
  • Reboot the device.
    Your IT department may have pushed some updates you haven’t installed yet, and if so they often seem to like to force a reboot within a certain timeframe – which may be during the meeting.
  • Check the charge
    If you’re using wireless devices with your video conference (like keyboard, mouse, headset, touchpad) – check their charges and make sure to recharge them before the meeting as applicable.

Gear updates

Most modern laptops, phones and tablets do have the capability to support video conferencing at a reasonable level. If you do find yourself in a lot of conferences, there are a few investments you may consider to upgrade the experience. Here are a few suggestions:

  • Get a speaker with a microphone built for online meetings.
    My Sennheiser SP20ML has a microphone and a speaker. This allows me to use both hands on the keyboard, drink coffee or whatever may be needed during meetings. It has echo cancelling built in and a few hardware buttons to control volume, hang up and answer calls – and mute.
  • Get a dedicated microphone (almost any USB mic will do).
    A dedicated microphone will often remove a lot of the background noise and improve the general audio quality.
  • Get a webcam.
    Most computers and tablets have a built-in camera, but often even the most basic webcams can upgrade the video quality significantly – and a webcam also allows you to find the best possible angle and avoid the “looking up your nose” view.
My work from home setup in 2003

Automatic MacOS shutdown

From time to time my Mac is doing stuff which takes quite a while. Converting images, converting video files between formats or other work which may take a long time (but is reasonably predictable).

In those cases I run a little command in the terminal, to automatically shut down the Mac upon completion:

sudo shutdown -h +120

This command sets a timer which shuts down the machine after two hours (the +120 parameter meaning 120 minutes).
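The time can also be given as an absolute value instead of a relative one – a minimal sketch, assuming the yymmddhhmm format from the macOS shutdown man page (here 23:30 on April 1st 2020):

# Shut down at a specific date and time (format yymmddhhmm)
sudo shutdown -h 2004012330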

Cancelling shutdown

The command has no UI but sets a timer. If you need to cancel the shutdown, simply write:

sudo killall shutdown

… and this will cancel the shutdown (if any is set). It does not tell you whether a shutdown timer was actually killed, but you can verify it with this command:

# Check whether a shutdown process is still scheduled (pgrep works on both macOS and Linux)
if pgrep -x shutdown > /dev/null; then echo "Shutdown is pending"; else echo "Shutdown is not scheduled"; fi

This will tell you whether a shutdown is pending or not. As pgrep is a standard unix command, the check should also work on most Linux servers if needed.

No access to *.dev sites

I’ve been having an odd issue for a couple of months. When accessing sites on a .dev domain (like, most recently, go.dev), my browsers have given me warnings and, as many of the sites use HSTS headers, not allowed me to visit them.

It seemed like a strange error, and I tried to remember if I had set up some proxy or VPN connection that could cause this issue. A few times I asked others on the net if they had issues – which was not the case – and I tried using a web proxy, and everything worked. Yet no matter which browser I used, it didn’t work.

I did try to see if it might be a DNS issue (in the local /etc/hosts file) or anywhere else, but no luck.

Today the issue was finally solved. Examining the certificate by clicking the “Not secure” label in the address bar, the certificate turned out to be a wildcard *.dev certificate, and that eventually provided the clue I needed.

Apparently at some point – long before the dot dev (.dev) domain existed as an actual valid domain namespace – I set up *.dev as a local development namespace and created a self-signed certificate to allow an HTTPS-based development environment for my local domains.

I had long since removed the /etc/hosts entries which sent all *.dev names to localhost, but I wasn’t aware of the self-signed certificate, which lingered on for years. As most modern sites now use HSTS headers, this caused the issue, and I was finally able to identify it, launch Keychain Access on my iMac and delete the self-signed certificate which was used for all *.dev sites.
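For reference, the same cleanup could presumably be done from the terminal with the security tool instead of the Keychain Access GUI – a rough sketch, assuming the certificate’s common name is literally “*.dev” (check what Keychain Access actually shows before deleting anything):

# List certificates in the default keychains with "dev" in the name
security find-certificate -a -c "dev"
# Delete the self-signed certificate by its common name (assumed here to be "*.dev")
security delete-certificate -c "*.dev"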

Ubuntu 16.04 to 18.04 TLS…

The site went offline for a few hours today. Sorry.

It turns out Ubuntu once again changed a major component, and the upgrade path didn’t work as it should have to keep the lights on after the upgrade.

I’ve been updating the security settings on the server all around, and one of the things I wanted to do was add TLSv1.3 support (and drop everything before TLSv1.2). For that, the best option seemed to be pushing the Ubuntu server forward to the newer LTS version (18.04) and, as part of this, getting a newer NGINX with TLSv1.3 support. That part worked sort of great.

Turns out, however, that Ubuntu switched to Netplan in the new LTS, and the migration on my server completely broke all network connectivity – it had no working network.

Being at DigitalOcean made it easy to get back onto the server using the (web) console from the dashboard for the server and start looking around. I had failed to read the release notes, but (ab)using friends from the office, I eventually figured out that it was the Netplan adoption, which did not move the existing interfaces configuration forward, that caused the issues.

Building a YAML configuration file was fairly easy once the issue was identified, but what a bad experience – particularly googling for details on how the IPv6 configuration should be set up was interesting.
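For reference, the kind of Netplan configuration that gets things going again looks roughly like this – a sketch with placeholder addresses, an assumed interface name (eth0) and an assumed file name, not my actual configuration:

# Write a minimal static IPv4 + IPv6 Netplan config (all addresses are placeholders)
cat <<'EOF' | sudo tee /etc/netplan/01-netcfg.yaml
network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - 203.0.113.10/24
        - 2001:db8::10/64
      gateway4: 203.0.113.1
      gateway6: 2001:db8::1
      nameservers:
        addresses: [8.8.8.8, 2001:4860:4860::8888]
EOF
# Test the configuration (it rolls back if you lose connectivity), then apply it
sudo netplan try
sudo netplan apply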

Anyway, eventually the network was configured for IPv4 and IPv6, and here I am, back again.

Crawl and save a website as PDF files

The web is constantly changing and sometimes sites are deleted as the business or people behind them move on. Recently we removed a few sites as we were doing maintenance and updates on the many sites we run at work. Some of them had interesting content – for personal or professional reasons – and we wanted to make a static copy before deleting the sites completely.

I have not found any easy, simple and well-working software which can produce an all-inclusive downloaded copy of a website (including all resources sourced from CDNs and 3rd party sites, to actually make it browsable offline). As I needed to make the copy reasonably fast, I chose to capture the contents of the site (a text/article heavy site) as PDFs.

My solution was to (try to) crawl all links on the site (to pages on the site) and feed all the URLs to a browser for rendering and generating a PDF.
This is a rough overview of what it took.

Crawling the site, finding links

Go seems an interesting language, and the Colly package seemed well suited to help do the job – it actually does most of the work. The script needed (which found 500+ pages on the site I crawled) looks something like this (in Go 1.8):

package main
 
import (
    "fmt"
    "os"
    "github.com/gocolly/colly"
)
 
func main() {
    // Instantiate default collector
    website := os.Args[1]
 
    c := colly.NewCollector(
        colly.AllowedDomains(website),
    )
 
    // On every a element which has href attribute call callback
    c.OnHTML("a[href]", func(e *colly.HTMLElement) {
        link := e.Attr("href")
        c.Visit(e.Request.AbsoluteURL(link))
    })
 
    c.OnRequest(func(r *colly.Request) {
        fmt.Println(r.URL.String())
    })
    c.Visit("https://" + website)
}

It assumes the site is running HTTPS and takes the domain name (an FQDN) as the first and only parameter. The output should be piped into a file, which will then contain the complete list of all URLs (one URL per line). Run the script without piping to a file to see the output on STDOUT and validate that it works as expected.

Printing a PDF from each URL on the site

The next step is to generate a PDF from a URL. There are a few different options for doing this. My main criterion was to find something which could work as part of a batch job, as I had hundreds of URLs to visit and “PDF’ify”. Google Chrome supports doing the job – like this (from the shell):

	google-chrome --headless --disable-gpu --print-to-pdf=output.pdf https://google.com/

This line should generate a PDF file called output.pdf of the Google.com front page.

Putting it all together

So with the above two pieces in place, the rest is just about automating the job, which a small batch job was put together to do:

#!/bin/bash
go1.8 run crawler.go example.com > example.com.txt

for url in $(cat example.com.txt); do
    # Build a filename from the URL by replacing characters unsuitable for filenames
    filename=${url//\//_}
    filename=${filename//\?/_}
    filename=${filename//:/_}
    google-chrome --headless --disable-gpu --print-to-pdf="$filename.pdf" "$url"
done

This is a rough job. The filenames of the generated PDF files are based on the original URLs, and they are not pretty – they could probably be much nicer with a little tinkering – but with a few hours of playing around, I had a passable copy of the hundreds of pages on the website as individual PDFs.

Linux – No space left on device, yet plenty of free space

My little server ran into an issue, and started reporting the error:

No space left on device

No worries, let’s figure out which disk is full and clean up…

Using the df command with -h (for human-readable output), it should be easy to find the issue:

root@server:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            483M     0  483M   0% /dev
tmpfs           100M  3.1M   97M   4% /run
/dev/vda         20G  9.3G  9.4G  50% /
tmpfs           500M     0  500M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           500M     0  500M   0% /sys/fs/cgroup
cgmfs           100K     0  100K   0% /run/cgmanager/fs
tmpfs           100M     0  100M   0% /run/user/1000

Strange. Notice how /dev/vda is only 50% filled, and all the other disk devices seem to be fine too. Well, after a little digging, thinking and googling, it turns out device space consists of two things – space (for the data) on the device and inodes (the structures used to manage the space – where the data goes – simplified).

So the next move was to look at the inodes:

root@server:~# df -i
Filesystem     Inodes   IUsed  IFree IUse% Mounted on
udev           123562     375 123187    1% /dev
tmpfs          127991     460 127531    1% /run
/dev/vda      1310720 1308258   2462  100% /
tmpfs          127991       1 127990    1% /dev/shm
tmpfs          127991       3 127988    1% /run/lock
tmpfs          127991      18 127973    1% /sys/fs/cgroup
cgmfs          127991      14 127977    1% /run/cgmanager/fs
tmpfs          127991       4 127987    1% /run/user/1000

Bingo – no inodes left on /dev/vda. “Too many files in the file system” is the cause, and that’s why it can’t save any more data.

The cleanup

Now, I did not expect the server in this case to have a huge number of files, so something must be off.

Finding where the many files were took a little digging too, starting with this command:

du --inodes -d 1 / | sort -n

It lists how many inodes are consumed by each directory under the root.

The highest number was in /var, and the next step was:

du --inodes -d 1 /var | sort -n

I repeated this, drilling further down, until I found the folder where an extreme number of files had accumulated – and that solved the issue(*).

*) It turned out to be PHP session files eating the inodes – something there is an easy solution for.
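A sketch of the kind of cleanup that handles it, assuming the default Debian/Ubuntu session directory and a 24-hour retention (adjust the path and age to match your setup and session.gc_maxlifetime):

# Delete PHP session files older than 24 hours (path and age are assumptions)
find /var/lib/php/sessions -type f -name 'sess_*' -mmin +1440 -delete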

DNSSEC and switching nameservers

I switched nameservers for all my domains yesterday. For many years I’ve been free-riding on GratisDNS, enjoying their free DNS service (and luckily never needing support in their forums).

Yesterday I switched to Cloudflare, and I’m now using them for DNS for this domain (and others). I don’t have any particular requirements, and the switch was mostly easy and automated to the extent possible. Two domains went smoothly, but the last one, my mahler.io domain, went astray for a few hours during the switch.

The issue was completely on me and required help from a friend to resolve. Most of my DNS records are completely basic, but I’ve tried to keep a current baseline and support CAA records and DNSSEC.

CAA does not matter when switching DNS servers, but DNSSEC does. As the name implies, DNSSEC is a DNS SECurity standard, and in this particular case the DNSSEC records did not only exist at GratisDNS, but also at NIC.io, my DNS registrar for the dot io domain.

Only once DNSSEC was removed at GratisDNS – and at NIC.io – did the transfer go through, and everything is now running smoothly on the Cloudflare DNS service.
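For reference, a quick way to check whether DNSSEC is still anchored during such a switch is to look for the DS record published by the parent zone and the DNSKEY records served for the domain – a sketch using dig:

# DS record(s) for the domain, published via the .io registry
dig DS mahler.io +short
# DNSKEY record(s) served by the authoritative nameservers
dig DNSKEY mahler.io +short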

Updates…

It’s been quiet here for a while, but things have been happening behind the scenes. In case you’re wondering, the site (and its surroundings) have seen a number of updates which may eventually make it into separate posts.

  • I’m running on a DigitalOcean droplet. It was provisioned as Ubuntu 12.04 LTS, which is dead by now (as in no more updates, including security updates). The server has now been rolled forward, in place, to Ubuntu 16.04 LTS.
  • As I was messing around with the server, I’ve added IPv6 support.
  • The DNS has been updated to have full support for DNSSEC.
  • My Let’s Encrypt certificates now have automated renewals, and I’ve added CAA support (see the sketch after this list).
  • The Webserver has been switched from Apache to NGINX.
  • PHP has been switched from the 5.6 series to a modern 7.0.
  • I’m adopting full Git-backed backup of all server setup and configuration using BitBucket.org. It’s not complete, but most config files have been added and are now managed in Git.
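As a sketch of what the automated renewal and CAA parts amount to (the cron schedule and domain below are illustrative, not my actual setup):

# Cron entry renewing Let's Encrypt certificates daily at 03:00 (certbot only renews when due)
0 3 * * * certbot renew --quiet

# CAA record limiting certificate issuance for a domain to Let's Encrypt
example.com. IN CAA 0 issue "letsencrypt.org"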

That was the majority of the changes to the site and server over the past few months. With these updates in place, I might get back to producing content for the site.

Devops: You build it; you run it… sort of

DevOps seems to be sweeping through IT departments these years, and for most developers it seems to be seen as a way of getting those pesky gatekeepers from Operations out of the way and shipping code whenever any developer feels like it.

The problem is, however, that in the eagerness to be a modern DevOps operation, the focus is often solely on the (short term) benefits of faster releases that “DevOps” provides over “Dev to Ops”, and many developers seem to forget the virtues Operations (should) bring to the party.

From my observations here are the top three fails when adopting DevOps:

  1. Too much focus on features, less on foundation. Often Operations is making sure that the operating system, libraries and other components utilized by the system are updated for security and end of life. As these tasks rarely seem to provide “obvious value” for the users of the system, prioritizing them seems to be a challenge (unless the developers find new cool features in the new version of a framework they want to use, naturally).
  2. Lack of monitoring tools. Making sure you don’t run out of system resources – be that disk space, memory or CPU – is boring. The same goes for customer support tools, diagnostic tools and other tools which forecast operational issues. As those tools belong to Operations, surely they can’t be important in DevOps, and they are often skipped or haphazard at best.
  3. No plan for handling incidents. Developers tend to move forward and rarely lack confidence, and thus the plan for handling incidents and operational issues is usually made ad hoc when the issues occur. During daytime, when everyone is available, this may not be a significant issue, but during nights, weekends and holidays, finding the right developer who can help often causes the incident to last longer – and in some cases even get worse, if eager developers make changes in a part of the code they aren’t familiar with.

I do firmly believe that DevOps is the right way to build and manage IT systems, but I also find that too many teams forget the Ops part and don’t incorporate the skills brought to IT by Operations-minded people, and the potential to build better systems through a DevOps setup is thus often not fully realized.

(This post originally appeared on LinkedIn)

Have your IT systems joined Social Media?

No, your servers should (probably) not have a Facebook profile, nor should your service bus have a Twitter profile, but as work tools change and evolve, you should probably consider moving the stream of status mails to the more modern “social media” used at work.

When you’re in DevOps you probably get a steady stream of emails from various systems checking in. It may be alert emails, health checks or backup-completed emails. Getting these mails has become more “fun” with the rise of unlimited mail storage and powerful email search tools, should you ever need to find something in the endless stream of server-generated mails.

As we’ve adopted new tools, the automatic messaging from the servers has more or less stayed the same – until recently. We’ve been drinking the Kool-Aid with Slack, Jira and other fancy tools, and recently we considered moving the IT systems along… and so we did.

Slack has a very nice API, and even with the free tier you can enable a “robot” (a robot.sh shell script, that is) to emit messages on the various Slack channels. We’ve integrated the slackbot into the various (automated) workflows in our Continuous Integration / Continuous Deployment pipeline, so that releases which move from one environment to the next – and finally into production – emit a message to a #devops channel. We’ve also made an #operations channel, and when our monitoring registers “alert events”, it emits messages onto this channel. Using this setup, anyone on the team can effectively and easily subscribe to push messages.
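One minimal way to do this from a shell script is a Slack incoming webhook – a sketch (the webhook URL is a placeholder, and this isn’t necessarily how our robot.sh does it):

# Post a message to a Slack channel through an incoming webhook (placeholder URL)
curl -X POST -H 'Content-Type: application/json' \
     --data '{"text": "Release 1.2.3 deployed to production"}' \
     https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX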

As a sanity measure and not to have “yet another mailbox”, we’ve tried to preach that everything on Slack should be considered ephemeral (that is short lived), and if anything worth remembering is said, it must be captured elsewhere.

Like many other companies, we use Jira to manage our software development. A task is tracked from idea to delivery and then closed and archived. As a Jira task is long-lived, we’ve also integrated the same CI/CD pipeline into Jira. Once the release train starts rolling, a ticket is created in Jira (by the scripting tools through the Jira API), updated as the release passes through the environments – and closed automatically when the solution has been deployed to production.
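Creating the ticket through the Jira REST API boils down to a call like this sketch (host, credentials, project key and field values are placeholders, not our actual setup):

# Create a release ticket via the Jira REST API (placeholder host, credentials and project)
curl -X POST -u release-bot:API_TOKEN \
     -H 'Content-Type: application/json' \
     --data '{"fields": {"project": {"key": "OPS"}, "summary": "Release train 2020-04-01", "issuetype": {"name": "Task"}}}' \
     https://jira.example.com/rest/api/2/issue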

The ticket created in Jira contains a changelog generated from the Git commits included in the pull request, and if possible (assuming the commit messages are formatted correctly) it links to the Jira issues contained in the release (built from the pull request).

The Jira tickets created by the release train are collected on a kanban board where each environment/stage has a separate column, giving a quick overview of the complete state of releases (what is where right now).

A future move we’ve considered is whether we should have the servers blog. Assuming agile developers are able to write reasonable commit messages which are readable (and comprehensible) by non-developers, it might be interesting to utilize a blogging tool such as WordPress to provide a historical log of releases.
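Should we go down that road, publishing a release note through the WordPress REST API could look something like this sketch (site URL, user and application password are placeholders, not an existing setup):

# Publish a release note as a WordPress post via the REST API (placeholder site and credentials)
curl -X POST -u release-bot:APPLICATION_PASSWORD \
     -H 'Content-Type: application/json' \
     --data '{"title": "Release 1.2.3", "content": "Changelog goes here", "status": "publish"}' \
     https://blog.example.com/wp-json/wp/v2/posts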

As you adopt new tools for communication, remember to also think of automated communication, which may have a useful place in the new tools. New platforms often have readily available APIs which allow your IT platforms to provide – or receive – information much more efficiently than the pagers, email or whatever was available when you set things up in times gone by.

(This post originally appeared on LinkedIn)