by Cristian Balan | May 15, 2024 | How-to, Magento
I am working on a new Magento ecommerce site and wanted to find out more about the Asynchronous Sending of Sales Emails feature. While digging into Adobe’s documentation, neither the Configuration best practices page nor the Sales Emails page really explains how this works. Still puzzled as to why this isn’t the default behaviour although recommended by Adobe themselves, I found the Proper way to configure asynchronous email sending in Magento article, which I thought was worth sharing and saving in my bookmarks. It explains how the feature works and how to enable it on an existing website, which comes with a caveat (see its SQL statements).
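For context, the feature itself is toggled under Stores > Configuration > Sales > Sales Emails (config path sales_email/general/async_sending), and the caveat for existing sites boils down to SQL roughly along these lines. This is a hedged sketch from memory, not the article’s exact statements (the send_email/email_sent columns are from Magento 2’s sales_order schema); read the linked article and take a backup before running anything:

```sql
-- Mark historical orders as already handled so the async email cron
-- doesn't suddenly (re)send emails for them once the feature is enabled.
-- Sketch only: verify against the linked article first.
UPDATE sales_order SET send_email = 0 WHERE email_sent = 1;
```

Similar statements would apply to the invoice, shipment and credit memo tables.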
by Cristian Balan | Oct 31, 2023 | Hosting, How-to, Mail
After adding a new domain to the WildDuck configuration you might also want to create a DKIM key for that domain.
A new DKIM key can be created via API.
I’m using Insomnia: I created a POST request to the /remote-api/dkim API URL and configured the X-Access-Token header with the authentication accessToken value from /etc/wildduck/wildduck-webmail.toml (see the [api] section in that file):
Then in the Query tab we can add a selector of our choice and the domain name:
Now we can press Send and copy the value (e.g. v=DKIM1;t=s;p=...
) from the Preview tab into our new TXT DNS record.
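If you prefer the command line over Insomnia, the same request can be sketched with curl. Everything below is a placeholder (host, port, token, domain, selector), and the exact request and response fields should be double-checked against the WildDuck API documentation:

```shell
# Create a DKIM key for a domain via the WildDuck API (values are placeholders)
curl -s -X POST "http://localhost:3000/remote-api/dkim" \
  -H "X-Access-Token: YOUR_ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"domain": "example.com", "selector": "mail"}'
# The JSON response contains the TXT record value (v=DKIM1;t=s;p=...)
# to paste into the new DNS record.
```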
by Cristian Balan | Mar 19, 2023 | Magento
Multiple users have reported issues with Magento 2 (see GitHub magento2/issues/23618 or magento2/issues/23908) where customers are unable to proceed with their orders during checkout due to apparent problems with their shipping address.
In those cases, customers will see a message stating “Unable to save shipping information. Please check input data.” or “The shipping information was unable to be saved. Verify the input data and try again.” E.g.:
Despite the screenshot above, the address might be perfectly correct, and the customer can’t do anything to unlock the situation, which leads to frustration.
Looking at the logs in var/log/exception.log, multiple related `Invalid customer address id` entries can be seen. E.g.:
main.CRITICAL: Invalid customer address id 691 {"exception":"[object] (Magento\\Framework\\Exception\\NoSuchEntityException(code: 0): Invalid customer address id 691 at /vendor/magento/module-quote/Model/QuoteAddressValidator.php:77)"} []
Please note that the ID above is not the customer ID but the customer address ID, which can be found in the `customer_address_entity` table. E.g.:
SELECT * FROM magentoDBname.customer_address_entity WHERE entity_id = 691;
This problem appears to be due to a bug occurring when particular conditions are met (see Stack Overflow). Sadly, judging by those reports on GitHub and elsewhere, Adobe doesn’t appear to have figured out a resolution for this edge case, which seems to still affect the most recent versions.
Some code changes have been suggested by users to fix the problem, while my preferred, albeit temporary, solution is to update the affected rows via SQL (so we don’t change the Magento core code).
If curious, we can find the affected customers with:
SELECT entity_id, customer_id FROM quote WHERE customer_id != 0 AND customer_is_guest = 1;
We could instead find more details with:
SELECT a.entity_id, a.customer_id, b.firstname, b.lastname, b.email FROM quote a, customer_entity b WHERE a.customer_id != 0 AND a.customer_is_guest = 1 AND a.customer_id = b.entity_id;
And fix them with:
UPDATE quote SET customer_is_guest = 0 WHERE customer_id != 0 AND customer_is_guest = 1;
by Cristian Balan | Oct 20, 2022 | Bookmarks
Pi-hole®
Network-wide Ad Blocking. You can run Pi-hole in a container, or deploy it directly to a supported operating system via our automated installer.
Syncthing
Syncthing is a continuous file synchronization program. It synchronizes files between two or more computers in real time, safely protected from prying eyes. Your data is your data alone and you deserve to choose where it is stored, whether it is shared with some third party, and how it’s transmitted…
MicroK8s vs k3s vs Minikube | MicroK8s
Feature comparison of lightweight Kubernetes distributions: MicroK8s, K3s and minikube.
Sandstorm
Take control of your web by running your own personal cloud server with Sandstorm.
Opt out of global data surveillance programs like PRISM, XKeyscore, and Tempora - PRISM Break
Opt out of global data surveillance programs like PRISM, XKeyscore and Tempora. Help make mass surveillance of entire populations uneconomical! We all have a right to privacy, which you can exercise today by encrypting your communications and ending your reliance on proprietary services.
Upload and share screenshots and images - print screen online | Snipboard.io
Easy and free screenshot and image sharing - upload images online with print screen and paste, or drag and drop.
JoinPeerTube
PeerTube is a decentralized video hosting network, based on free/libre software. Join the federation and take back control of your videos!
by Cristian Balan | Dec 4, 2021 | Azure, Monitoring
I found myself in the position of having to get Azure Monitor alerts notifications into Microsoft Teams. So I thought it would be as easy as creating an Incoming Webhook in Teams and adding its URL to an Azure Action Group, right? Wrong!
After trying that in vain, I ended up “googling” the subject, only to find out that the best way to achieve this is with an Azure Logic App. I’ve also seen others use Function Apps, while the easiest approach of them all is relying on an alert management system similar to PagerDuty (etc.) or a better monitoring solution.
The only problem with the easiest approach is that the organisation needs to be willing to pay for the tool, unless you’re in the fortunate position of having one already, which I wasn’t. Hence, let’s go down the complicated route, the way Microsoft likes to do things anyway (private joke).
One of the most common results when looking for a Logic App approach is this article by Bruno Gabrielli: https://techcommunity.microsoft.com/t5/core-infrastructure-and-security/azure-monitor-alert-notification-via-teams/ba-p/2507676
However, like other attempts on this subject, I don’t think that guide is foolproof, as I still wasted hours until I eventually figured out how to get it done. So I thought I’d write down something which I hope might help me (or someone else) if I have to do this again at a later stage.
Logic App
Let’s create a Logic App based on a Consumption plan (I’ve seen reports that this is not going to work with a Standard plan):
Once created and accessed, we are greeted by a Logic App Designer with a few proposed triggers to choose from. Let’s pick “When a HTTP request is received”.
Paste the following JSON schema and click Next step (you might want other payloads depending on the type of alert you want to send to Teams):
{
  "properties": {
    "data": {
      "properties": {
        "context": {
          "properties": {
            "condition": {
              "properties": {
                "allOf": {
                  "items": {
                    "properties": {
                      "dimensions": {
                        "items": {
                          "properties": {
                            "name": {
                              "type": "string"
                            },
                            "value": {
                              "type": "string"
                            }
                          },
                          "required": [
                            "name",
                            "value"
                          ],
                          "type": "object"
                        },
                        "type": "array"
                      },
                      "metricName": {
                        "type": "string"
                      },
                      "metricValue": {
                        "type": "integer"
                      },
                      "operator": {
                        "type": "string"
                      },
                      "threshold": {
                        "type": "string"
                      },
                      "timeAggregation": {
                        "type": "string"
                      }
                    },
                    "required": [
                      "metricName",
                      "dimensions",
                      "operator",
                      "threshold",
                      "timeAggregation",
                      "metricValue"
                    ],
                    "type": "object"
                  },
                  "type": "array"
                },
                "windowSize": {
                  "type": "string"
                }
              },
              "type": "object"
            },
            "conditionType": {
              "type": "string"
            },
            "description": {
              "type": "string"
            },
            "id": {
              "type": "string"
            },
            "name": {
              "type": "string"
            },
            "portalLink": {
              "type": "string"
            },
            "resourceGroupName": {
              "type": "string"
            },
            "resourceId": {
              "type": "string"
            },
            "resourceName": {
              "type": "string"
            },
            "resourceType": {
              "type": "string"
            },
            "subscriptionId": {
              "type": "string"
            },
            "timestamp": {
              "type": "string"
            }
          },
          "type": "object"
        },
        "properties": {
          "properties": {
            "key1": {
              "type": "string"
            },
            "key2": {
              "type": "string"
            }
          },
          "type": "object"
        },
        "status": {
          "type": "string"
        },
        "version": {
          "type": "string"
        }
      },
      "type": "object"
    },
    "schemaId": {
      "type": "string"
    }
  },
  "type": "object"
}
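For reference, here is a hand-written payload that fits this schema; every value below is illustrative (subscription IDs, names, links and the schemaId are made up for the example), and it comes in handy for testing the trigger later:

```json
{
  "schemaId": "AzureMonitorMetricAlert",
  "data": {
    "version": "1.0",
    "status": "Activated",
    "context": {
      "id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/microsoft.insights/metricAlerts/my-alert",
      "name": "My availability alert",
      "description": "Ping test failed",
      "conditionType": "Metric",
      "condition": {
        "windowSize": "PT5M",
        "allOf": [
          {
            "metricName": "availabilityResults/availabilityPercentage",
            "dimensions": [],
            "operator": "LessThan",
            "threshold": "100",
            "timeAggregation": "Average",
            "metricValue": 0
          }
        ]
      },
      "subscriptionId": "00000000-0000-0000-0000-000000000000",
      "resourceGroupName": "my-rg",
      "resourceName": "my-app",
      "resourceType": "microsoft.insights/components",
      "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg/providers/microsoft.insights/components/my-app",
      "portalLink": "https://portal.azure.com/",
      "timestamp": "2021-12-04T10:00:00Z"
    },
    "properties": {}
  }
}
```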
For the Next step, search for Condition. Pick Control and choose Condition, then select the status dynamic content and set it “is equal to” Activated.
Continue and add Teams actions (Post message in a chat or channel) to both True and False conditions (you can do the same with Slack and also send notifications to multiple channels at the same time).
Now, here you can go ahead and complete the Message with dynamic content and/or expressions.
In my case, I wanted a clickable link to the Azure alert. The URL is given by the portalLink dynamic content; however, that doesn’t come through as clickable in Teams, and I’ve only been able to render it clickable by editing the Logic App’s messageBody section via the code view.
Essentially, my messageBody in the True condition looks like this:
"messageBody": "<p>🚨 Azure <strong></strong><strong>@{triggerBody()?['data']?['context']?['name']}</strong><strong></strong><br>\n@{triggerBody()?['data']?['context']?['description']}.<br>\n<a href=\"@{triggerBody()?['data']?['context']?['portalLink']}\">@{triggerBody()?['data']?['context']?['portalLink']}</a></p>",
Then for False I simply have:
"messageBody": "<p>The Azure <strong>@{triggerBody()?['data']?['context']?['name']}</strong> is now <span style=\"color: rgb(65,168,95)\"><strong>Resolved ✔</strong></span><br>\n<a href=\"@{triggerBody()?['data']?['context']?['portalLink']}\">@{triggerBody()?['data']?['context']?['portalLink']}</a></p>",
With all that Saved, we can move on to the Alert.
Alert Rule
For this exercise, I have created a ping URL test from within an App Insights’ Availability, which by default created an alert for it. Let’s edit this alert rule and add an Action Group:
Now you can either choose an existing Action Group or create a new one, which is what I’m doing. Either way, we need to go to the Actions tab and choose Webhook.
You might be tempted to pick the Logic App action type instead of a Webhook, and that’s what you’ll also find in some guides. However, I wasted a lot of time on that and never got it to work: the alerts, although sent to Teams, were coming through empty.
Once Webhook is selected, we have to paste the “HTTP POST URL” from the first Logic App step.
Please note that you do not need to enable the “common alert schema”:
With the alert rule now saved, you have it bound to an Action Group coupled with a Webhook pointing at the Logic App’s HTTP POST URL trigger.
So let’s do some testing.
Teams
As mentioned, in my example I’m using a simple ping URL test against an App Service. I’m going to stop the App Service and, once the thresholds are reached, I expect a notification in my chosen Teams channel.
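Alternatively, the Logic App trigger can be smoke-tested by hand before touching any Azure resource: save a payload matching the JSON schema from the first step as alert.json and POST it straight at the trigger’s HTTP POST URL (the URL below is a placeholder for your own):

```shell
# Manually fire the Logic App trigger with a sample alert payload
curl -X POST "https://prod-00.uksouth.logic.azure.com/workflows/.../triggers/manual/paths/invoke?..." \
  -H "Content-Type: application/json" \
  --data @alert.json
# A message should then land in the chosen Teams channel.
```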
Here we go:
Starting the App Service back up should now trigger a “resolved” notification to my Teams channel:
by Cristian Balan | Oct 26, 2021 | Hosting, How-to, Mail
WildDuck is a simple mail server solution, often accompanied by the WildDuck Webmail service. While you can create email addresses on any domain via both WildDuck’s API and the Webmail GUI, when it comes to aliases the user interface limits you by default to the domain you initially configured.
To add further domains to choose from when creating alias addresses, edit the wildduck-webmail.toml file (located in /etc/wildduck/) and add your additional domains to the domains=[] array like so:
[service]
...
# allowed domains for new addresses
domains=["oviliz.com","seconddomain.com"]
Restart the webmail service with systemctl restart wildduck-webmail and you’re good to go.
Now if you want to also create a separate DKIM key and DNS record, follow this short guide.
by Cristian Balan | Apr 29, 2021 | Magento
A brief write-up by Absolute Commerce on building and deploying Magento with a pipeline:
https://absolutecommerce.co.uk/blog/magento-2-pipeline-deploy-zero-downtime
Had this opened in one of my many tabs, so I thought I’d better save it this way for future reference. We might use some of its inputs for our pipelines.
by Cristian Balan | Apr 18, 2021 | Linux, Networking
I was using NextDNS when I decided to get a UniFi Dream Machine (UDM) and switch to its built-in content filtering. However, I wasn’t particularly impressed with this beta feature, and after using it for a few months I decided to switch back to NextDNS.
Thankfully it is possible to integrate NextDNS with the UDM router. The UniFi OS doc page is pretty useful alongside the Conditional Configuration page.
Essentially, I had SSH already enabled on the UDM so I’ve installed NextDNS with:
sh -c 'sh -c "$(curl -sL https://nextdns.io/install)"'
An error shows up as the installer is unable to start the service the Ubuntu (Debian) way. Well, this is UniFi OS, so it can be ignored.
The prompt then waits for a Configuration ID to be provided, but we do that below with a separate command to cover multiple IDs, so I just pressed CTRL+C.
I then set a specific configuration, matching my NextDNS Configuration IDs with the different Networks on the UDM, and restarted the service:
nextdns config set -config e2h243 -config 192.168.100.0/24=sdgd12 -config 192.168.2.0/24=534567 -setup-router
nextdns restart
This way I can view the Logs in the UI for the individual LAN devices. E.g.:
The service logs are also useful to monitor after the restart:
watch -d "nextdns log"
UniFi Dream Router update: on the UDR, in order to see the correct hostnames in NextDNS, we also did:
nextdns config set -auto-activate -report-client-info
nextdns restart
by Cristian Balan | Jan 3, 2021 | Linux, Nextcloud, OwnCloud
Install and Configure PHP 8.0
As of this writing, the Raspbian 10 repository only offers PHP 7.3. We want 8.0 instead, so let’s add the source for Ondřej Surý’s PHP packages:
sudo wget -O /etc/apt/trusted.gpg.d/php.gpg https://packages.sury.org/php/apt.gpg
echo "deb https://packages.sury.org/php/ buster main" | sudo tee /etc/apt/sources.list.d/php.list
sudo apt update
Now we can install the latest available PHP 8.0 and the necessary modules for Nextcloud:
sudo apt install php8.0-fpm php8.0-curl php8.0-cli php8.0-mysql php8.0-gd php8.0-common php8.0-xml php8.0-json php8.0-intl php8.0-imagick php8.0-dev php8.0-mbstring php8.0-zip php8.0-soap php8.0-bz2 php8.0-bcmath php8.0-gmp php8.0-imap php8.0-opcache php8.0-apcu php8.0-redis -y
In the main php.ini file, uncomment the date.timezone line by removing the preceding ; and change the value to your own timezone. Also uncomment the cgi.fix_pathinfo line and change its value to 0.
sudo nano /etc/php/8.0/fpm/php.ini
798c798
< ;cgi.fix_pathinfo = 1
---
> cgi.fix_pathinfo=0
962c962
< ;date.timezone = America/Denver
---
> date.timezone = Europe/London
Save and exit (CTRL+X, press Y to confirm and then Enter).
Do the same in the CLI’s php.ini:
sudo nano /etc/php/8.0/cli/php.ini
Next, let’s add a pool configuration file:
sudo nano /etc/php/8.0/fpm/pool.d/nextcloud.conf
[nextcloud]
user = www-data
group = www-data
listen.owner = www-data
listen.group = www-data
listen = /run/php/nextcloud.sock
listen.allowed_clients = 127.0.0.1
pm = dynamic
pm.max_children = 50
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 35
env[HOSTNAME] = $HOSTNAME
env[PATH] = /usr/local/bin:/usr/bin:/bin
env[TMP] = /tmp
env[TMPDIR] = /tmp
env[TEMP] = /tmp
php_value[session.save_handler] = files
php_value[session.save_path] = /var/lib/php/sessions
php_value[max_execution_time] = 3600
php_value[memory_limit] = 7G
php_value[post_max_size] = 7G
php_value[upload_max_filesize] = 7G
php_value[max_input_time] = 3600
php_value[max_input_vars] = 2000
php_value[date.timezone] = Europe/London
;php_value[opcache.enable] = 1
;php_value[opcache.enable_cli]=1
php_value[opcache.memory_consumption] = 128
php_value[opcache.interned_strings_buffer] = 8
php_value[opcache.max_accelerated_files] = 10000
php_value[opcache.revalidate_freq] = 1
php_value[opcache.save_comments] = 1
Restart PHP and set the service to start automatically after a server reboot:
sudo systemctl restart php8.0-fpm && sudo systemctl enable php8.0-fpm
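As an aside, if PHP-FPM ever refuses to restart after a pool change, the configuration can be validated first; the php-fpm8.0 binary name is how the Debian/Raspbian packages ship it:

```shell
# Dry-run check of the FPM configuration, including the new nextcloud pool
sudo php-fpm8.0 -t
```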
Install MariaDB
In this step, we will install the latest MariaDB version and create a new database for the Nextcloud installation. The latest MariaDB packages are available in the default repository, so let’s install them with the command below:
sudo apt install mariadb-server -y
After the installation is complete, enable the service to launch every time the system reboots:
sudo systemctl enable mariadb
Next, we will configure the MariaDB root password using the mysql_secure_installation shell script:
sudo mysql_secure_installation
Press Enter and set a password for the root user, then type Y for the subsequent questions unless you want to do differently:
Enter current password for root (enter for none):
Set root password? [Y/n]
Remove anonymous users? [Y/n]
Disallow root login remotely? [Y/n]
Remove test database and access to it? [Y/n]
Reload privilege tables now? [Y/n]
Create a User and Database on MariaDB for NextCloud
sudo mysql
CREATE DATABASE nextcloud;
GRANT ALL PRIVILEGES ON nextcloud.* TO nextclouduser@'localhost' IDENTIFIED BY 'CHANGEwithYOURpassword';
FLUSH PRIVILEGES;
EXIT
Let’s Encrypt certificate via Cloudflare DNS
I prefer DNS-01 challenge over HTTP-01 and as I’m also behind Cloudflare, I’m going to use the certbot-dns-cloudflare plugin on top of the certbot instructions for NGINX with snapd to generate an SSL certificate.
sudo snap install core; sudo snap refresh core
sudo snap install --classic certbot
sudo ln -s /snap/bin/certbot /usr/bin/certbot
sudo snap set certbot trust-plugin-with-root=ok
sudo snap install certbot-dns-cloudflare
Create a Cloudflare API Token with an Edit zone DNS template and choose to include the desired specific zone. E.g.:
Create a /root/.cloudflare.ini file and add your token in it. E.g.:
# Cloudflare API token used by Certbot
dns_cloudflare_api_token = 0123456789abcdef0123456789abcdef01234567
Restrict permissions:
sudo chmod 600 /root/.cloudflare.ini
Issue the SSL certificate with the command below and enter an email address to be informed if the certificate doesn’t renew automatically (i.e. there’s an issue) and answer the questions:
sudo certbot certonly --dns-cloudflare --dns-cloudflare-credentials /root/.cloudflare.ini -d nextcloud.oviliz.com --post-hook 'service nginx restart'
Do a --dry-run renewal to verify that certbot now remembers to use the DNS challenge:
sudo certbot renew --dry-run
Install Nextcloud
sudo wget https://download.nextcloud.com/server/releases/latest.zip -P /var/www/
sudo unzip /var/www/latest.zip -d /var/www/
sudo chown -R www-data:www-data /var/www/nextcloud
sudo nano /etc/nginx/sites-available/nextcloud.oviliz.com
upstream php-handler {
#server 127.0.0.1:9000;
server unix:/run/php/nextcloud.sock;
}
server {
listen 80;
listen [::]:80;
server_name nextcloud.oviliz.com;
# enforce https
return 301 https://$server_name:443$request_uri;
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name nextcloud.oviliz.com;
# Use Mozilla's guidelines for SSL/TLS settings
# https://mozilla.github.io/server-side-tls/ssl-config-generator/
# NOTE: some settings below might be redundant
ssl_certificate /etc/letsencrypt/live/nextcloud.oviliz.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/nextcloud.oviliz.com/privkey.pem;
# Add headers to serve security related headers
# Before enabling Strict-Transport-Security headers please read into this
# topic first.
add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;" always;
#
# WARNING: Only add the preload option once you read about
# the consequences in https://hstspreload.org/. This option
# will add the domain to a hardcoded list that is shipped
# in all major browsers and getting removed from this list
# could take several months.
add_header Referrer-Policy "no-referrer" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Download-Options "noopen" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Permitted-Cross-Domain-Policies "none" always;
add_header X-Robots-Tag "none" always;
add_header X-XSS-Protection "1; mode=block" always;
# Remove X-Powered-By, which is an information leak
fastcgi_hide_header X-Powered-By;
# Path to the root of your installation
root /var/www/nextcloud;
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
# The following 2 rules are only needed for the user_webfinger app.
# Uncomment it if you're planning to use this app.
#rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
#rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;
# The following rule is only needed for the Social app.
# Uncomment it if you're planning to use this app.
#rewrite ^/.well-known/webfinger /public.php?service=webfinger last;
location = /.well-known/carddav {
return 301 $scheme://$host:$server_port/remote.php/dav;
}
location = /.well-known/caldav {
return 301 $scheme://$host:$server_port/remote.php/dav;
}
# set max upload size
client_max_body_size 7G;
fastcgi_buffers 64 4K;
# Enable gzip but do not remove ETag headers
gzip on;
gzip_vary on;
gzip_comp_level 4;
gzip_min_length 256;
gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;
# Uncomment if your server is build with the ngx_pagespeed module
# This module is currently not supported.
#pagespeed off;
location / {
rewrite ^ /index.php;
}
location ~ ^\/(?:build|tests|config|lib|3rdparty|templates|data)\/ {
deny all;
}
location ~ ^\/(?:\.|autotest|occ|issue|indie|db_|console) {
deny all;
}
location ~ ^\/(?:index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|oc[ms]-provider\/.+|.+\/richdocumentscode\/proxy|.+\/richdocumentscode_arm64\/proxy|)\.php(?:$|\/) {
fastcgi_split_path_info ^(.+?\.php)(\/.*|)$;
set $path_info $fastcgi_path_info;
try_files $fastcgi_script_name =404;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $path_info;
fastcgi_param HTTPS on;
# Avoid sending the security headers twice
fastcgi_param modHeadersAvailable true;
# Enable pretty urls
fastcgi_param front_controller_active true;
fastcgi_pass php-handler;
fastcgi_intercept_errors on;
fastcgi_request_buffering off;
# Raise timeout values.
# This is especially important when the Nextcloud setup runs into timeouts (504 gateway errors)
fastcgi_read_timeout 600;
fastcgi_send_timeout 600;
fastcgi_connect_timeout 600;
}
location ~ ^\/(?:updater|oc[ms]-provider)(?:$|\/) {
try_files $uri/ =404;
index index.php;
}
# Adding the cache control header for js, css and map files
# Make sure it is BELOW the PHP block
location ~ \.(?:css|js|woff2?|svg|gif|map)$ {
try_files $uri /index.php$request_uri;
add_header Cache-Control "public, max-age=15778463";
# Add headers to serve security related headers (It is intended to
# have those duplicated to the ones above)
# Before enabling Strict-Transport-Security headers please read into
# this topic first.
#add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;" always;
#
# WARNING: Only add the preload option once you read about
# the consequences in https://hstspreload.org/. This option
# will add the domain to a hardcoded list that is shipped
# in all major browsers and getting removed from this list
# could take several months.
add_header Referrer-Policy "no-referrer" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Download-Options "noopen" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Permitted-Cross-Domain-Policies "none" always;
add_header X-Robots-Tag "none" always;
add_header X-XSS-Protection "1; mode=block" always;
# Optional: Don't log access to assets
access_log off;
}
location ~ \.(?:png|html|ttf|ico|jpg|jpeg|bcmap|mp4|webm)$ {
try_files $uri /index.php$request_uri;
# Optional: Don't log access to other assets
access_log off;
}
}
sudo ln -s /etc/nginx/sites-available/nextcloud.oviliz.com /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl restart nginx
Now, access the URL [https://(chosenDomain)/] with a web browser and configure the administrative user account as well as the database connection details created earlier. Having filled in the admin user and database information, complete the installation and wait until it’s done.
Install Redis
sudo apt install redis-server
sudo nano /var/www/nextcloud/config/config.php
Configure Nextcloud to use APCu and Redis memcaches:
'memcache.local' => '\OC\Memcache\APCu',
'memcache.distributed' => '\OC\Memcache\Redis',
'memcache.locking' => '\OC\Memcache\Redis',
'redis' => [
'host' => '/var/run/redis/redis-server.sock',
'port' => 0,
'dbindex' => 0,
'password' => 'secret',
'timeout' => 1.5,
],
Update the Redis configuration in /etc/redis/redis.conf accordingly, i.e. uncomment the Unix socket options and ensure the socket path and port settings match your Nextcloud configuration.
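For reference, the relevant directives in /etc/redis/redis.conf look roughly like this; the socket path must match the ‘host’ value in config.php above, port 0 disables TCP, and the password line is only needed if you actually set one:

```
port 0
unixsocket /var/run/redis/redis-server.sock
unixsocketperm 770
# requirepass secret
```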
Be sure to set the right permissions on redis.sock so that your webserver can read and write to it. For this you typically have to add the webserver user to the redis group:
sudo usermod -a -G redis www-data
Finally, restart Redis, NGINX and PHP-FPM:
sudo systemctl restart redis-server nginx php8.0-fpm
by Cristian Balan | Dec 13, 2020 | Linux, Magento, Monitoring
I’ve used Papertrail before its acquisition by SolarWinds and was impressed by its simple interface and log-scraping capabilities. At the time, I had set Papertrail to monitor a few auth logs and trigger Slack notifications whenever SSH access occurred from an IP other than the whitelisted ones.
Well, years have passed since I last played with it, and now I have a new use case: monitoring Magento errors recorded in its var/log/system.log and var/log/exception.log files. As I’ve forgotten how I did this at the time, I thought I’d put down my steps, which might help later on.
First thing, I went straight into reading the docs. That brought me to the app log files aggregation page.
The first step is to download the latest remote_syslog2 script from their GitHub repository. The only problem I have with this method is that you will struggle to keep things up to date without manually checking the repo for a new version. Hopefully one day we’ll be able to install it via an OS package.
Ok, let’s download and install the latest version. As I’m using Ubuntu, I’ve done this:
wget https://github.com/papertrail/remote_syslog2/releases/download/v0.20/remote-syslog2_0.20_amd64.deb
sudo dpkg -i remote-syslog2_0.20_amd64.deb
The second indicated step is configuring and starting remote_syslog. The example only shows how to do it with a single log, so without too much fuss I instead downloaded the custom config file and replaced the content of /etc/log_files.yml with:
files:
- /home/user/myMagentoWebsite.co.uk/public/var/log/system.log
- /home/user/myMagentoWebsite.co.uk/public/var/log/exception.log
destination:
host: logs7.papertrailapp.com
port: 77777
protocol: tls
exclude_patterns:
- main.INFO
pid_file: /var/run/remote_syslog.pid
It turns out we can add multiple files just under the first example line. Also, the original /etc/log_files.yml had an exclude_patterns example, which I’ve used to prevent main.INFO records from filling Papertrail as these are outside the scope.
The remote_syslog2 GitHub page obviously has more examples and explanations if we want to dig further into how it works.
Finally, running sudo remote_syslog should get us set.