The fix for that was to disable Compton shadows entirely or to exclude the Zoom Share Frame (“cpt_frame_window”) in the Compton shadow configuration.
E.g. in .config/compton.conf
shadow-exclude = [
  "name = 'cpt_frame_window'"
];
But since Zoom version 5.9.3 the problem has come back, because Zoom renamed the Zoom Share Frame window to “cpt_frame_xcb_window”. It can be adjusted the same way:
shadow-exclude = [
  "name = 'cpt_frame_xcb_window'"
];
In the last Caddy post I mentioned how to set up HTTP/3; however, the syntax has changed slightly.
{
Simple reverse proxy (to a specific port)
matrix.rmsol.de {
    reverse_proxy localhost:8008  # port of your service
}
Serving a static page (HTML/CSS), in this case with gzip and an extended Cache-Control header.
rmsol.de {
    root * /var/www/rmsol.de
    encode gzip
    header Cache-Control "max-age=31536000"
    file_server
}
Serving a PHP website.
ip-whois.de {
    root * /var/www/ip-whois.de
    php_fastcgi unix//run/php/php-fpm.sock
    file_server
}
nextcloud.rmsol.de {
I already wrote a short post about the Zoom dunst module. Here is just a slightly adjusted version for polybar.
To integrate the module, just add the snippet to your polybar modules and adjust the path to the zoom.sh script.
[module/zoom]
type = custom/script
exec = ~/scripts/zoom.sh
interval = 5
The zoom.sh file looks like this:
The service module is an easy integration for your Linux services, which you can enable/disable/toggle. E.g. I often use it to toggle the OpenVPN service or other frequently toggled services.
It can be turned on/off just by clicking on the name of the service.
To integrate the module, just add the snippet to your polybar modules and adjust the path to the service.sh script.
[module/service]
type = custom/script
exec = ~/scripts/service.sh
interval = 5
You can identify the audio device you would like to control with:
cat /proc/asound/cards
The output looks like this:
In my case, I was interested in the USB Audio Device called “C-Media”, which you will also find in the script itself. (Just take the first word of your device name from the second row.)
The volume can be easily controlled by scrolling up/down over the icon.
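The lookup the volume script performs can be sketched in Python. The sample /proc/asound/cards content and the helper below are illustrative, not taken from the actual script:

```python
# Find the ALSA card index whose description line (the second row of
# each /proc/asound/cards entry) contains a given keyword, e.g. "C-Media".
# SAMPLE is made-up example output for demonstration only.
SAMPLE = """\
 0 [PCH            ]: HDA-Intel - HDA Intel PCH
                      HDA Intel PCH at 0xf7210000 irq 31
 1 [Device         ]: USB-Audio - USB Audio Device
                      C-Media Electronics Inc. USB Audio Device at usb-0000:00:14.0-2, full speed
"""

def find_card_index(cards_text, keyword):
    """Return the card index whose second row mentions keyword, else None."""
    lines = cards_text.splitlines()
    # Entries come in pairs: the index row and the description row.
    for first, second in zip(lines[::2], lines[1::2]):
        if keyword in second:
            return int(first.split()[0])
    return None

print(find_card_index(SAMPLE, "C-Media"))  # 1
```

On a real system you would read the text with `open("/proc/asound/cards").read()` instead of the sample string.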
It came in handy that Caddy v2 had just been released. I had already wanted to test the first version, but I didn’t have the chance. Therefore I decided to use Caddy v2 and have now been running it for 3 months. And I must say I like it.
Check out more features on the Caddy website.
{
It is much cleaner than Nginx.
PS: you can migrate your Nginx config file to Caddy via config adapters.
I also enabled HTTP/3. It is still experimental (not just in Caddy, but in general). Your browser also needs to support it.
Normally I use the OpenVPN client for a VPN, but this time I decided (because I was using Gnome) to use the built-in VPN manager of Gnome. However, I encountered a few problems:
When you try to import the .ovpn file, you get an error message saying “Error: the plugin does not support import capability”, because Gnome Network Manager does not support OpenVPN out of the box.
You need to install the OpenVPN plugin for the network manager:
sudo apt install network-manager-openvpn-gnome
After installing the plugin, I still got the same error message, but I could import it over the shell with (replace the filename with yours):
sudo nmcli connection import type openvpn file <filename>
But the import was still not successful. It was failing on a line with “route”, namely route remote_host 255.255.255.255 net_gateway default. That seems to be a common problem.
I just commented it out with #, like: # route remote_host 255.255.255.255 net_gateway default
I could finally import the OpenVPN configuration, but I could not enable it: the slider went straight back to “off”.
I could solve it by setting the password option to “Store the password for all users”.
I developed this as a blocklet for i3blocks, the status bar I use with i3wm (a tiling window manager). It tries to find a specific window on your desktop called “as_toolbar”, which indicates that you are sharing your screen.
If the window is found, the dunst service will be paused and you will be notified via dunst that “Sharing mode” is activated. Upcoming dunst messages will not be removed, so that when you stop sharing your screen, all the messages will be shown.
It also shows a small bell and slashed bell icon (in my case) in the status bar. You can change it to any icon or text you like.
PS: Most likely the icon will be shown as a placeholder box in your editor.
I named it “zoom.sh”
[zoom]
command=~/scripts/zoom.sh
interval=5
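The blocklet’s core decision can be sketched in Python (the real blocklet is a shell script; the window search that would normally run against X11 is simulated here with a plain list of window names, and the icon names are placeholders):

```python
def sharing_status(window_names):
    """Return (icon, dunst_paused): the slashed bell while Zoom's
    sharing toolbar window "as_toolbar" exists, the normal bell otherwise."""
    sharing = "as_toolbar" in window_names
    icon = "bell-slash" if sharing else "bell"
    return icon, sharing

print(sharing_status(["Firefox", "as_toolbar"]))  # ('bell-slash', True)
print(sharing_status(["Firefox"]))                # ('bell', False)
```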
In my case, I simulated/virtualized a webcam. That means that with the help of v4l2 I could send a video stream (or an image) to a virtual webcam. And every app that can normally use a webcam (e.g. Zoom or Skype) played the video I sent to it.
Here are some of the commands I found useful.
With v4l2loopback we can virtualize a webcam.
sudo apt install v4l2loopback-dkms
With v4l2-ctl we can control the video4linux drivers. (It is in the v4l-utils package)
sudo apt install v4l-utils
With ffmpeg we can stream our video to the webcam
sudo apt install ffmpeg
Let’s load the kernel module v4l2loopback, which creates a virtual video device.
sudo modprobe v4l2loopback
If you need multiple video devices, you can load them as follows:
sudo modprobe v4l2loopback devices=2
The video devices will appear under /dev/video[0-9].
You can check with v4l2-ctl which IDs your video devices have:
v4l2-ctl --list-devices
You can stream a video (input.mov in this case) to your virtualized webcam:
ffmpeg -re -i input.mov -f v4l2 /dev/video0
If you would like to send your video in a loop:
ffmpeg -stream_loop -1 -re -i input.mov -f v4l2 /dev/video0
If you would like to send just a picture to your virtualized webcam:
ffmpeg -loop 1 -re -i input.jpg -f v4l2 -vcodec rawvideo -pix_fmt yuv420p /dev/video0
If you would like to stream your desktop (very handy if the application you use doesn’t support screen sharing):
ffmpeg -f x11grab -framerate 25 -video_size 1920x1080 -i :1 -f v4l2 /dev/video0
In my case the input is “:1”; this could be different for you, e.g. “:0”. You can identify your display with echo $DISPLAY and your resolution with xrandr -q. If you have multiple screens, you can add an x-offset (e.g. the width of the first screen) to the display, e.g. :1+1920.
You can clone e.g. your real webcam (/dev/video0) to multiple devices, in this case /dev/video1 and /dev/video2.
That is handy if an application blocks a device (because it is in use): you can bypass that by cloning the webcam and using the cloned video device in the other application.
ffmpeg -f v4l2 -i /dev/video0 -f v4l2 /dev/video1 -f v4l2 /dev/video2
video1 and video2 need to be created first, e.g. with sudo modprobe v4l2loopback devices=2.
ffplay is a simple media player which uses the FFmpeg libraries:
ffplay /dev/video0
If the stream you are playing is a bit laggy (as it was in my case), you can tune it with:
ffplay /dev/video0 -fflags nobuffer -flags low_delay -framedrop
mpv is a free (as in freedom) media player for the command line.
mpv /dev/video0
For this purpose, I looked at the statistics from DE-CIX in Frankfurt, which is the second largest exchange point worldwide in terms of peak traffic (source: Wikipedia).
If we look at the growth from the starting point until 14 March 2020 (almost a whole year), we see growth of around 1400 gigabits (at DE-CIX Frankfurt). And if we compare 14 March 2020 with 24 March 2020, we see growth of around 1200 gigabits.
That means that in about 10 days (in reality even less) internet traffic grew almost as much as it normally does in a whole year.
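A quick back-of-the-envelope calculation makes the scale of that jump clearer (numbers rounded from the figures above):

```python
# Rough comparison of the two growth rates quoted above
# (rounded figures from the DE-CIX Frankfurt statistics).
yearly_growth_gbit = 1400   # growth over ~1 year, up to 14 March 2020
crisis_growth_gbit = 1200   # growth from 14 to 24 March 2020

per_day_normal = yearly_growth_gbit / 365
per_day_crisis = crisis_growth_gbit / 10

print(round(per_day_normal, 1))                # 3.8 gigabits added per day
print(round(per_day_crisis, 1))                # 120.0 gigabits added per day
print(round(per_day_crisis / per_day_normal))  # 31, roughly 31x the usual pace
```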
This means a hard time for the VPN servers, which have to transfer the entire traffic from your PC to the internal tools and online services, and back to your PC.
In order not to overload the VPN servers, you don’t want to transfer data intensive services like Youtube over the VPN. Also, you don’t want to send data flow sensitive services like audio/video communication tools over the VPN (unless you want the additional encryption).
One solution to this could be a route exception for such services, so that they bypass the VPN and are routed directly to the internet.
If you are using OpenVPN, you can easily add routes that bypass the VPN for a specific domain.
Just open your OpenVPN client config (.ovpn or .conf file) and add the bypass routes you like.
E.g. just add the following lines after the line route remote_host 255.255.255.255 net_gateway:
route youtube.com 255.255.255.255 net_gateway # For youtube
Another handy OpenVPN client config option is “auth-user-pass”, which allows you to save the username and password for the VPN, so that you don’t need to enter them every time you enable the VPN (e.g. if you have the VPN in autostart).
Just add the following line to the config:
auth-user-pass pass.txt
And create a file named “pass.txt” next to the OpenVPN config file. The content of the pass.txt file is just:
yourusername
yourpassword
The main reason for that was to identify which files drain the traffic, to answer the question: do I need to move the libs to a CDN to improve performance?
But I was surprised by something else: Bots/Crawlers.
I mean, all of us know that the internet is constantly scanned by crawlers like Google or Bing, and by malicious bots.
But I didn’t expect that my blog would be scanned around 300 times a day by malicious bots.
Here are just a few examples from my access logs:
Uploading a file with the “Upload” scalar isn’t a big problem if you are using a library like graphql-upload. But if you just want to try/simulate it with Insomnia, you need to know what happens under the hood.
In fact, you will send a multipart form request to the GraphQL server.
It is split into three parts (be careful, naming is important):
{
{"0":["variables.file"]}
Note: operations and map have the type “Text (Multi-line)” and not “File”.
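Sketched in Python, the three parts could be built like this. The mutation name singleUpload and the file name are illustrative, not from the original request:

```python
import json

# Part 1: "operations" - the GraphQL document, with the file variable null.
operations = json.dumps({
    "query": "mutation ($file: Upload!) { singleUpload(file: $file) { id } }",
    "variables": {"file": None},   # placeholder, filled in via the map
})

# Part 2: "map" - tells the server which form field replaces which variable.
file_map = json.dumps({"0": ["variables.file"]})

# Part 3 would be the file itself, sent under the form field name "0".
# With the requests library the whole thing would be a single POST, e.g.:
# requests.post(url, data={"operations": operations, "map": file_map},
#               files={"0": open("photo.jpg", "rb")})

print(json.loads(file_map)["0"])  # ['variables.file']
```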
My question was: what impact does it have on performance when we use logging with string concatenation?
So, I wrote a short script to “benchmark” it (and yes, I know it is not a proper benchmark).
Compared were: logging with ‘+’ concatenation vs formatting with % vs f-strings, over multiple runs with the benchmark functions in different orders, so that we can at least exclude cold-start effects.
If we take the average of all measuring points and compare them with each other, we come to this result:
| formatting with % | concatenation with + | f-strings |
|---|---|---|
| 7.087545455 sec | 7.588454545 sec | 7.707727273 sec |
| 100% | 107.06% | 108.75% |
That means that formatting with % is 8.75% faster than f-strings and 7.06% faster than concatenation, at least in combination with “deactivated” logging.
The reason is probably that with + and f-strings the string is built anyway, even if we don’t log anything, while with %-formatting the logging module defers the formatting, so the string is never built.
import logging
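The original script is not reproduced here, but a minimal sketch of such a comparison could look like this. With the level above DEBUG the messages are dropped, so only the cost of building the message argument differs between the three styles:

```python
import logging
import timeit

logging.basicConfig(level=logging.ERROR)  # DEBUG messages are dropped
log = logging.getLogger(__name__)
name = "world"

def with_percent():
    log.debug("hello %s", name)   # lazy: only formatted if actually emitted

def with_concat():
    log.debug("hello " + name)    # string is built even though it is dropped

def with_fstring():
    log.debug(f"hello {name}")    # string is built even though it is dropped

for fn in (with_percent, with_concat, with_fstring):
    print(fn.__name__, timeit.timeit(fn, number=100_000))
```

The exact timings vary per machine and per run, but the %-style tends to come out ahead for exactly the reason described above.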
Here are some highlights which are mostly from PEP 8, but also from other best practices:
Avoid one-letter variables like x and y (except in very short code blocks). Give your variables and functions meaningful names and use the following naming conventions:
Variable, function, package and module names should be:
lower_case_with_underscores
Classes and Exceptions
CamelCase
Protected methods
_single_leading_underscore
Private methods
__double_leading_underscore
Constants
ALL_CAPS_WITH_UNDERSCORES
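Put together, the conventions might look like this made-up snippet (all names are illustrative):

```python
MAX_RETRIES = 3                       # constant: ALL_CAPS_WITH_UNDERSCORES

class RetryLimitReached(Exception):   # exception: CamelCase
    pass

class HttpClient:                     # class: CamelCase
    def fetch_page(self, url):        # function: lower_case_with_underscores
        retry_count = 0               # variable: lower_case_with_underscores
        return retry_count

    def _parse_headers(self, raw):    # protected: _single_leading_underscore
        return raw

    def __reset_state(self):          # private: __double_leading_underscore
        pass
```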
Comparisons to singletons like None should always be done with is or is not, never the equality operators.
if user is None:  # Use that
if user == None:  # Not that
For sequences (strings, lists, tuples), use the fact that empty sequences are falsy. Don’t use the len() function.
if not seq:  # Use that
if len(seq) == 0:  # Not that
Don’t compare boolean values to True or False using ==.
if greeting:  # Use that
if greeting == True:  # Not that
Use spaces instead of tabs (your IDE can probably convert tabs to spaces automatically). 2 or 4 spaces are both fine; just agree on a convention in your team. Never mix the indentation styles.
Put all imports at the top of the file in three sections, each separated by a blank line, in this order: standard library imports, third-party imports, and local application imports.
That makes it clear where each module comes from. Imports themselves should also usually be on separate lines.
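For example, the grouping might look like this (the third-party and local names are placeholders and therefore commented out):

```python
# 1. standard library imports
import os
import sys

# 2. third-party imports (placeholders)
# import requests

# 3. local application imports (placeholder)
# from myapp import helpers

print(sys.version_info.major)
```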
Pylint is a Python static code analysis tool which looks for programming errors and helps enforce the rules of PEP 8. Most IDEs have a pylint integration. Personally I use the TensorFlow code style, which is based on PEP 8 and the Google Python Style Guide. Just download the pylintrc file, which contains all the rules, from e.g. TensorFlow and point your pylint to that file.
Shows package version information for npm, jspm, dub and dotnet core.
Easily launch multiple shell configurations in the terminal.
This extension allows matching brackets to be identified with colours. The user can define which tokens to match, and which colours to use.
Differences between v1 and v2? v2 uses the same bracket-parsing engine as VSCode, greatly increasing speed and accuracy. A new version was released because settings were cleaned up, breaking backwards compatibility.
This extension colorizes the indentation in front of your text alternating four different colors on each step. Some may find it helpful in writing code for Nim or Python.
It helps you to navigate in your code, moving between important positions easily and quickly. No more need to search for code. It also supports a set of selection commands, which allows you to select bookmarked lines and regions between bookmarked lines. It’s really useful for log file analysis.
Language extension for VSCode/Bluemix Code that adds syntax colorization for both the output/debug/extensions panel and .log files.
Highlight TODO, FIXME and other annotations within your code.
You can also create your own annotations
This extension will display inline in the editor the size of the imported package. The extension utilizes webpack with babili-webpack-plugin in order to detect the imported size.
A basic spell checker that works well with camelCase code.
The goal of this spell checker is to help catch common spelling errors while keeping the number of false positives low.
Visual Studio Code plugin that autocompletes filenames.
ESPs are great for giving your sensors the ability to report their data, thanks to the on-board WIFI chip, which can connect to your network.
Imagine you have multiple sensors, and you would like to access the measured data and show it in Home Assistant. You can either spin up a web server on your ESP and pull the data or send data from your ESP to your Home Assistant.
The advantage of the push strategy is power saving through the ESP’s deep sleep mode, which will be explained later.
In my case the + and - pin labels were incorrect and I had to swap them, so that the orange wire is attached to + (and not -).
The temperature sensor is connected to the ESP over the GPIOs. The ESP itself sends the data (temperature) over WIFI to your Home Assistant via MQTT (Message Queuing Telemetry Transport). The Mosquitto broker add-on (MQTT) needs to be activated on your Home Assistant, in the Add-on Store overview. Don’t forget to set the username and password in the MQTT config.
| WIRE | PIN |
|---|---|
| yellow | G (ground) |
| orange | 3V3 |
| cyan | A0 |
Don’t forget to set WIFI SSID, WIFI Password and also MQTT Server IP, username and password in the code.
To see the temperature in the Home Assistant web interface, we have to add the sensor to configuration.yaml like this:
# Sensors
sensor:
  - platform: mqtt
    name: "Temperature"
    state_topic: "home/temperature"
    unit_of_measurement: "°C"
It will look like this in your interface:
ESP provides an excellent power saving feature through the deep sleep mode.
Deep sleep mode turns off everything except the RST pin, which is required to turn the ESP on again.
The following steps will be done within the code:
You need an additional wire from RST to D0 (purple wire in my case) to make that feature work.
In my example, I will show you how to turn lights on/off using Teckin switches with the help of IFTTT. The result in Home Assistant will look like this:
Another, and probably the easiest, way to use Smart Life components is the Tuya component.
Just install the Tuya app, create an account, and configure your Smart Life components.
It is similar to the Smart Life app.
Additionally, add the tuya config to your configuration file:
tuya:
  username: YOUR_USERNAME
  password: YOUR_PASSWORD
  country_code: YOUR_COUNTRY_CODE
  platform: smart_life
IFTTT is a great project where you can connect your apps and services with each other.
Check the IFTTT discovery page to find out which services/apps are supported and their applets.
First, we connect IFTTT with Smart Life here.
Next, we connect IFTTT with Home Assistant. You can find the setup steps here.
Repeat the steps for the Turn off state.
You can test it in Home Assistant by sending the event to the IFTTT service:
{"event": "big_light_switch_on"}
Note that it takes up to 3 seconds until it takes effect.
The configuration for the lights/switches looks like this (in my case I have two lights/switches):
Don’t forget to replace the IFTTT key.
# IFTTT Configuration
ifttt:
  key: YOUR_IFTTT_KEY
Here is an example of how you can create a card for your lights:
I bought the “D1 Mini” model of the ESP8266 with 4MB flash (you can get them e.g. here).
Note: It is not the WeMos D1 Mini, but a copy of it. The PIN diagram is the same, which you can find here.
There are many firmwares/SDKs for the ESP8266. My favorites are:
You can find many great use cases for your ESP8266 (DIY Projects):
If you are used to Arduino IDE, you can use it for ESP8266 as well.
Install the Arduino IDE (if not already installed)
Go to File –> Preferences and add the link http://arduino.esp8266.com/stable/package_esp8266com_index.json to the Additional Boards Manager URLs.
Go to Tools –> Board –> Boards manager
Select WeMos D1 R1 under Tools –> Board
The ESP8266 library provides many examples, e.g. a mini web server or blinking an LED.
Just to test your ESP, you could directly deploy/upload an example from Arduino IDE to your ESP.
Let’s take the Blink example:
Connect your ESP via USB to your PC and click the “Upload” button in the top-left corner of the IDE.
You should see a blinking ESP LED.
MySQL
JDBC Driver:
com.mysql.jdbc.Driver
JDBC URL:
jdbc:mysql://{{hostname}}:{{port}}/{{dataBaseName}}?user={{username}}&password={{password}}
MariaDB
JDBC Driver:
org.mariadb.jdbc.Driver
JDBC URL:
jdbc:mariadb://{{hostname}}:{{port}}/{{dataBaseName}}?user={{username}}&password={{password}}
PostgreSQL
JDBC Driver:
org.postgresql.Driver
JDBC URL:
jdbc:postgresql://{{hostname}}:{{port}}/{{dataBaseName}}?user={{username}}&password={{password}}
Microsoft SQL Server
JDBC Driver:
com.microsoft.sqlserver.jdbc.SQLServerDriver
JDBC URL:
jdbc:sqlserver://{{hostname}}:{{port}};databaseName={{dataBaseName}};user={{username}};password={{password}}
Oracle (thin)
JDBC Driver:
oracle.jdbc.driver.OracleDriver
JDBC URL:
jdbc:oracle:thin:{{username}}/{{password}}@//{{hostname}}:{{port}}/{{serviceName}}
Oracle (OCI)
JDBC Driver:
oracle.jdbc.driver.OracleDriver
JDBC URL:
jdbc:oracle:oci:{{username}}/{{password}}@//{{hostname}}:{{port}}/{{serviceName}}
IBM DB2
JDBC Driver:
com.ibm.db2.jcc.DB2Driver
JDBC URL:
jdbc:db2://{{hostname}}:{{port}}/{{dataBaseName}}:user={{username}};password={{password}}
IBM DB2 (net driver)
JDBC Driver:
com.ibm.db2.jdbc.net.DB2Driver
JDBC URL:
jdbc:db2://{{hostname}}:{{port}}/{{dataBaseName}}:user={{username}};password={{password}}
SAP HANA
JDBC Driver:
com.sap.db.jdbc.Driver
JDBC URL:
jdbc:sap://{{hostname}}:{{port}}/?database={{dataBaseName}}&user={{username}}&password={{password}}
Informix
JDBC Driver:
com.informix.jdbc.IfxDriver
JDBC URL:
jdbc:informix-sqli://{{hostname}}:{{port}}/{{dataBaseName}}:INFORMIXSERVER={{serverName}};user={{username}};password={{password}}
HSQLDB (in-memory)
JDBC Driver:
org.hsqldb.jdbc.JDBCDriver
JDBC URL:
jdbc:hsqldb:mem:{{dataBaseName}}
HSQLDB (file)
JDBC Driver:
org.hsqldb.jdbc.JDBCDriver
JDBC URL:
jdbc:hsqldb:file:{{dataBaseName}}
H2 (in-memory)
JDBC Driver:
org.h2.Driver
JDBC URL:
jdbc:h2:mem:{{dataBaseName}}
Apache Derby
JDBC Driver:
org.apache.derby.jdbc.EmbeddedDriver
JDBC URL:
jdbc:derby://{{hostname}}:{{port}}/{{dataBaseName}};create=true;user={{username}};password={{password}}
Teradata
JDBC Driver:
com.teradata.jdbc.TeraDriver
JDBC URL:
jdbc:teradata://{{hostname}}:{{port}}/database={{dataBaseName}},user={{username}},password={{password}}