Installing MongoDB Shell (mongosh) to Connect to Your UniFi Network Controller Database Running on Linux

I have been supporting a project that implemented a WiFi hotspot solution using Ubiquiti Access Points and a centralized, self-hosted UniFi Network Controller running on Linux for a while now. The controller has a nice web portal where you can pretty much manage your APs: adopt them, reconfigure them, monitor connections, or configure a hotspot. It had been a smooth ride until I ran into an issue that required accessing the backend database and taking care of things from the CLI. If you want to learn more about installing a self-hosted UniFi controller on Linux, you can read my previous detailed article here.

The UniFi controller uses MongoDB as its backend, and accessing it requires a CLI tool called “mongosh”, which will be our focus for this article. I am doing this exercise on Linux Mint version 21.1, which belongs to the Debian family of distros. To get mongosh working on Linux, I followed the steps below:

Go to the official MongoDB website and select “Linux x64” as the platform (pick the exact platform you are running if it’s listed in the drop-down; otherwise, Linux x64 is the safe general choice if your distro flavor is not listed or you are not sure which platform you are on). Under Package, select “tgz” and click “Download” or “Copy Link”.

You can then upload the file to your Linux server using SFTP or WinSCP. Alternatively, you can click on “Copy Link” and use the wget utility on your Linux server to fetch the file. I used the latter:
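For reference, the fetch looks something like this (the exact URL comes from the “Copy Link” step above; the 2.2.5 Linux x64 tarball shown here is simply the version that was current for me):

# cd /tmp
# wget https://downloads.mongodb.com/compass/mongosh-2.2.5-linux-x64.tgz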

Note

Before taking this direction of installing mongosh from the “.tgz” file, I tried installing it with “apt-get install mongosh”, but it failed with the error: “Unable to locate package mongosh”.
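If you would rather stick with apt, mongosh is published in MongoDB’s own repository rather than the default distro repos. A rough sketch for an Ubuntu “jammy” base such as Mint 21.1 (the repo path and the package name “mongodb-mongosh” are taken from MongoDB’s documentation; adjust the series and release to your setup):

# wget -qO- https://www.mongodb.org/static/pgp/server-7.0.asc | gpg --dearmor -o /usr/share/keyrings/mongodb-server-7.0.gpg
# echo "deb [signed-by=/usr/share/keyrings/mongodb-server-7.0.gpg] https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/7.0 multiverse" > /etc/apt/sources.list.d/mongodb-org-7.0.list
# apt update && apt install -y mongodb-mongosh

That said, the “.tgz” route below works on any distro and is what I used.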

Navigate to the directory where you uploaded the .tgz archive (if you downloaded it to your PC first and uploaded it with SFTP or WinSCP) or where you downloaded it (if you used wget with the URL). I downloaded mine into the “/tmp” directory. Use the “tar” command to unpack the archive:

The command below will unpack the “mongosh-2.2.5-linux-x64.tgz” archive:

# tar -zxvf mongosh-2.2.5-linux-x64.tgz

The commands below are used to make the “mongosh” binary executable:

# cd mongosh-2.2.5-linux-x64/
# chmod +x bin/mongosh

There are two ways to go about this: you can copy the files in “mongosh-2.2.5-linux-x64/bin” into a directory in your system environment PATH, or you can create symbolic links to them from a directory in your PATH. For this exercise I used the former and copied the two binary files into my system environment PATH.

Use the commands below to copy the binaries into one of your environment PATH directories (the shared library goes to /usr/local/lib):

# echo $PATH
# cd /tmp/mongosh-2.2.5-linux-x64/bin/
# cp mongosh /usr/local/bin/
# cp mongosh_crypt_v1.so /usr/local/lib/

Before you test connecting to your MongoDB with mongosh, ensure that the MongoDB service is running.

The command below is used to check the status of the MongoDB service:

# service mongod status

You can now test the connection to your MongoDB using “mongosh”.

The command below will connect to the MongoDB deployment on your server (note that the UniFi controller’s MongoDB listens on port 27117 rather than the default 27017):

# mongosh --port 27117

Some useful commands to work with UniFi Network Controller MongoDB:

1.) List all databases:
show dbs
2.) Switch to the ace database:
use ace
3.) List all APs on the controller:
db.device.find({},{site_id:"",ip:"",name:"",mac:""})
4.) Find an AP by MAC address:
db.device.find({"mac":"70:a7:41:db:54:58"})
5.) List all the sites on the controller:
db.site.find()
6.) List all collections:
show collections;
7.) Print all users on the controller:
db.admin.find()
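A couple of convenience variations on the queries above, using standard mongosh cursor helpers:

use ace
db.device.countDocuments({})
db.device.find({"mac":"70:a7:41:db:54:58"}).pretty()

The first query counts all devices known to the controller; the second pretty-prints the full document for a single AP, which is much easier to read than the default compact output.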

Please let me know in the comments about any new commands and tricks that you have used with the UniFi MongoDB.

About the Author

Joshua Makuru Nomwesigwa is a seasoned Telecommunications Engineer with vast experience in IP Technologies; he eats, drinks, and dreams IP packets. He is a passionate evangelist of the fourth industrial revolution (4IR), a.k.a. Industry 4.0, and all the technologies that it brings: 5G, Cloud Computing, Big Data, Artificial Intelligence (AI), Machine Learning (ML), Internet of Things (IoT), Quantum Computing, etc. Basically, anything techie, because a normal life is boring.

Just Resolved vSphere Client Web UI Error – “No Healthy Upstream”

I recently ran into the error “no healthy upstream” while trying to access the VMware vSphere Client web UI. All efforts to restart the vCenter services, reboot the VM hosting the vCenter, and run a health check with “lsdoctor” did not yield any joy! After days of troubleshooting, I finally landed on the source of my troubles: the DNS configuration. Apparently, the IT team had swapped the old infrastructure DNS servers for new ones, which meant the DNS IPs had changed. Checking the vCenter Server Management network settings, I found the old DNS IPs still in the config; I edited these to the new DNS IPs, and that did the magic.

In the vCenter Server Management interface, click on “Networking” and edit the DNS settings, replacing the old DNS IPs with the new ones.

Allow a few minutes for all the services to start and for the vSphere Client web server to initialize, then try accessing the UI again; it should work fine.
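If you prefer to double-check from the command line, the vCenter appliance is Linux underneath, so (assuming you have SSH access to the appliance shell, and substituting your own vCenter FQDN for the hypothetical one below) a quick sanity check of the DNS fix looks like this:

# cat /etc/resolv.conf
# nslookup vcenter.example.local

The first command should list the new DNS IPs; the second should resolve your vCenter’s FQDN through them.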

Master ZTE ZXR10 Software with These Tips and Tricks for Routers and Switches

Introduction

This article is intended to help you work with the ZTE ZXR10 software on routers and switches. The ZTE ZXR10 software is a unified operating system that runs on various ZTE routers and switches, such as the ZXR10 M6000 series, the ZXR10 T8000 series, and the ZXR10 8900E series. It provides rich features and functions for network management, security, reliability, and performance optimization.

In this article, you will learn how to access the ZTE ZXR10 software and enter the configuration mode, how to use the basic and advanced commands and parameters to configure the system settings and the specific features and functions, and how to use the diagnostic commands and tools to monitor and troubleshoot the network performance and issues.

To follow this article, you need basic knowledge and skills in IP networking and routing protocols, such as TCP/IP, VLANs, OSPF, BGP, IS-IS, and MPLS. The tips and tricks are organized into three categories: Basic Configuration, Advanced Configuration, and Monitoring and Troubleshooting.

Basic Configuration: learn how to access the routers and switches, enter the configuration mode, and use the basic commands and parameters to configure system settings, such as the hostname, the password, the IP address, and the routing protocol.

Entering Configuration Mode:

R1#configure t
Enter configuration commands, one per line. End with CTRL/Z.
R1(config)#

Static Default Route in a VRF (Virtual Routing and Forwarding) [e.g. Gateway = 10.10.10.1]

R1#configure t
R1(config)# ip route vrf TEST 0.0.0.0 0.0.0.0 10.10.10.1 name Put_Route_Label_Here
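To confirm that the route landed in the VRF table, you can use the VRF-specific show command covered in the Monitoring and Troubleshooting section below (assuming the VRF name TEST from the example above):

R1# show ip forwarding route vrf TEST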

Advanced Configuration: learn how to use the advanced commands and parameters to configure the specific features and functions of the ZTE ZXR10 software, such as VLANs, QoS, the firewall, and VPNs.

Access Control List (ACL) to manage which IP subnets are allowed SSH access to your system:

1. Enter Configuration Mode:
# configure t

2. Create the ACL:
(config)# ipv4-access-list SSH_ACCESS_MGT
rule 10 permit 10.0.0.0 0.255.255.255
rule 20 permit 172.16.0.0 0.15.255.255
rule 30 permit 192.168.0.0 0.0.255.255
rule 100 deny any

3. Exit the ACL config Level:
(config-ipv4-acl)# exit

4. Apply the ACL to the SSH server:
(config)# ssh server access-class ipv4 SSH_ACCESS_MGT

5. Commit and exit the configuration
(config)# commit
(config)# exit

6. Save the configuration persistently:
# write
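To verify the ACL and where it is applied without paging through the whole configuration, the “include” output filter described in the Monitoring and Troubleshooting section comes in handy:

# show running-config | include SSH_ACCESS_MGT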

Access Control List (ACL) to manage which IP subnets are allowed SNMP access to your system:

1. Enter Configuration Mode:
# configure t

2. Create the ACL:
(config)# ipv4-access-list SNMP_ACCESS_MGT
rule 10 permit 10.0.0.0 0.255.255.255
rule 20 permit 172.16.0.0 0.15.255.255
rule 30 permit 192.168.0.0 0.0.255.255
rule 100 deny any

3. Exit the ACL config Level:
(config-ipv4-acl)# exit

4. Apply the ACL to the SNMP-Server:
(config)# snmp-server access-list ipv4 SNMP_ACCESS_MGT

5. Commit and exit the configuration
(config)# commit
(config)# exit

6. Save the configuration persistently:
# write

Monitoring and Troubleshooting: learn how to use the diagnostic commands and tools to monitor and troubleshoot network performance and issues, such as the ping, traceroute, show, and debug commands.

View Running Configuration:

R1# show running-config
R1# show running-config | ?
begin - Begin with the line that matches
count - Show total count
exclude - Exclude lines that match
first - Show begin of output only
ignore-case - Ignore case when matching letters
include - Include lines that match
last - Show end of output only
one-line - Show table item in one line

Viewing all the VRFs configured on the router:

R1# show ip vrf brief

Viewing the routing-table specific to a VRF

R1# show ip forwarding route vrf TEST_VRF

Viewing the Logs:

# show logfile

Port mirroring on a switch using SPAN (Switch Port ANalyzer) is a useful feature during troubleshooting that allows you to copy the traffic from one or more source ports or VLANs to a destination port for analysis (e.g. using Wireshark) or for monitoring. This example was tested on a ZXR10 8902E switch, which supports up to four SPAN sessions; each session can have multiple source ports or VLANs and one destination port. In this example, the traffic of interest is on port gei-0/0/0/1 and the laptop collecting the traffic sample is connected to gei-0/0/0/2.

1. Enter Configuration Mode:
SW1# configure t

2. Define the SPAN session and give it a session ID:
SW1(config)# span session 1

3. Define the destination interface (where the laptop collecting the traffic is connected):
SW1(config-span-session-1)# default destination interface gei-0/0/0/2
SW1(config-span-session-1)# exit

4. Specify the mirrored source interfaces (traffic of interest, can be from multiple ports), both transmit (tx) and receive (rx) directions:
SW1(config)# span apply session 1 source interface gei-0/0/0/1 direction tx
SW1(config)# span apply session 1 source interface gei-0/0/0/1 direction rx
SW1(config)# commit

5. View all active SPAN sessions on the switch:
SW1# show span session all

6. View Configuration specific to SPAN sessions:
SW1# show running-config span-session

Port mirroring on a switch using PM-QoS traffic-mirror plus an ACL, tested on a ZXR10 8905E switch. In this example, the traffic of interest is to/from a specific host IP, 10.10.10.10, on port gei-0/0/0/1, and the laptop collecting the traffic sample is connected to gei-0/0/0/2. You can fine-tune the ACL to fit your traffic of interest by applying more specific parameters such as port numbers:

1. Enter Configuration Mode:
SW1# configure t

2. Create the ACL:
SW1(config)# ipv4-access-list MIRROR1
SW1(config-ipv4-acl)# rule 10 permit ip 10.10.10.10 0.0.0.0 any
SW1(config-ipv4-acl)# rule 15 permit ip any 10.10.10.10 0.0.0.0
SW1(config-ipv4-acl)# rule 20 permit ip any any
SW1(config-ipv4-acl)# exit

3. Apply the ACL to the source interface (i.e. the interface of interest connecting to the host you are investigating or monitoring, can also be applied to multiple ports):
SW1# configure t
SW1(config)# interface gei-0/0/0/1
SW1(config-if-gei-0/0/0/1)# ipv4-access-group ingress MIRROR1
SW1(config-if-gei-0/0/0/1)# exit

4. Configure the PM-QoS traffic-mirror session and specify the destination interface (i.e. the interface connected to the laptop with WireShark):
SW1# configure t
SW1(config)# pm-qos
SW1(config-pm-qos)# traffic-mirror in ipv4-access-list MIRROR1 rule-id 10 interface gei-0/0/0/2
SW1(config-pm-qos)# traffic-mirror in ipv4-access-list MIRROR1 rule-id 15 interface gei-0/0/0/2
SW1(config-pm-qos)# commit
SW1(config-pm-qos)# exit
SW1(config)#exit
SW1# write

Remember to bookmark this page and return often, as it will be updated frequently with fresh tips and tricks. If you have been working with ZTE software on routers and switches, I would love to hear from you and learn more new tricks and tips.

Let’s Secure Our Cacti Web Portal with a Free SSL Certificate From LetsEncrypt

If you missed my previous article, in which I walked you through installing Cacti Server on CentOS 9, you can read it here.

The ISP I am setting this up for wants to extend Cacti Portal access to their clients so they can view their bandwidth usage graphs. The sub-domain name is already set up, but it wouldn’t be wise to share with your clients a URL that pops up the “Not Secure” warning! Hence the need to install an SSL certificate and set up an HTTPS connection, so the clients can see that nice little padlock in the address bar reassuring them that the “Connection is secure”. So, let’s dive in! You will need:

  • Root/Sudo access to your Linux server installation
  • A web service like Apache (HTTPD) or Nginx
  • An internet connection to your Linux server
  • A domain or sub-domain name pointing to your server public IP address

Note: I am running all the commands as root; if you are not root, you will need sudo privileges to execute these commands.

It’s good practice to always update your Linux OS before adding any new packages or services, because some new packages might not work well with obsolete OS components. It also helps you keep up with the most recent security and bug patches on your system.

The command below will update your Linux OS packages

# dnf update -y

Certbot (the tool that helps you obtain and manage SSL certs from LetsEncrypt) is not available in the default CentOS 9 repo, so we need to add the EPEL repo before we can install Certbot.

The command below will add the EPEL repo to your Linux installation

# dnf install epel-release -y

As mentioned in step 2, Certbot is what will help us to download and manage SSL certificates from LetsEncrypt. Certbot works by using the ACME protocol, which is a standard for communicating with certificate authorities (CAs) and proving your control over a domain name. Certbot can run on your web server or on your own computer, and it can perform different types of challenges to verify your domain ownership, such as creating a file, modifying a DNS record, or answering a TLS request.

The command below will install Certbot from LetsEncrypt

# dnf install certbot -y

You can check your Certbot installation with the “certbot --version” command; this will return the version of Certbot installed on your server.

We can now use the Certbot tool to obtain a free SSL certificate from LetsEncrypt. Make sure you have an active internet connection and that the domain or subdomain is configured and reachable from the internet. For this example, I will use the subdomain “isp.fastnet.com”.

The command below will fetch SSL certificate from LetsEncrypt

# certbot certonly --standalone -d isp.fastnet.com

As you can see in the screenshot above, we ran into a small issue: “Could not bind TCP port 80 because it is already in use by another process on this system (such as a web server). Please stop the program in question and then try again.”

In the error above, Certbot is complaining about not being able to bind to port 80 while trying to fetch the SSL certificate. Indeed, the “httpd” service on our server is active and running, which caused the conflict with Certbot; we need to first stop “httpd” and then re-attempt Step 4. Use “systemctl stop httpd” to stop the Apache httpd web server.
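So the recovery sequence is simply: stop the web server, then re-run the Certbot request from Step 4 (we start httpd again later, once the certificate paths are configured):

# systemctl stop httpd
# certbot certonly --standalone -d isp.fastnet.com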

We have now successfully received the SSL certificate. Take note of the file locations for the certificate and key:

  • Certificate is saved at: /etc/letsencrypt/live/isp.fastnet.com/fullchain.pem
  • Key is saved at:         /etc/letsencrypt/live/isp.fastnet.com/privkey.pem

Before activating our SSL certificate, we need to install “mod_ssl”, a module for the Apache HTTPD server that provides support for SSL and TLS encryption and authentication. “mod_ssl” uses the OpenSSL library to implement the SSL and TLS protocols, which allow the server and the client to exchange cryptographic keys and certificates, and to encrypt and decrypt the data. mod_ssl can also configure various aspects of the SSL/TLS connection, such as cipher suites, protocols, session caching, etc.

The command below will install the SSL module for Apache httpd

# yum install mod_ssl

Having installed “mod_ssl”, navigate to the Apache configuration directory and edit the “ssl.conf” file to add the SSL certificate and key file paths.

The command below will change to Apache config directory

# cd /etc/httpd/conf.d

The command below will open “ssl.conf” for editing

# nano ssl.conf
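Inside “ssl.conf”, point the certificate directives at the LetsEncrypt files saved in Step 4. A minimal sketch of the relevant lines (the VirtualHost block already exists in the stock file; you mainly adjust ServerName and the two file paths):

<VirtualHost _default_:443>
    ServerName isp.fastnet.com
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/isp.fastnet.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/isp.fastnet.com/privkey.pem
</VirtualHost>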

The command below will start Apache httpd web server

# systemctl start httpd

If you have a firewall running on your server, make sure to allow access to the HTTPS service on the public zone. In this example, we have FirewallD running on our server, so we add the https service to the public zone and reload the firewall.

The commands below are used to add the HTTPS service to FirewallD and reload the rules

# firewall-cmd --permanent --add-service=https --zone=public
# firewall-cmd --reload
# firewall-cmd --list-all-zones

At this point, you are ready to test your HTTPS connection in the browser and the annoying “Not secure” warning should now be replaced by a nice “Secure” padlock icon.

SSL certificates from LetsEncrypt are always set to expire after 90 days (3 months), so you need to renew them regularly. You can automate this task with a cron job.
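A minimal sketch of such a cron job, assuming the standalone method from Step 4 (which needs port 80 free, hence the hooks that stop and start Apache around the renewal). Run “crontab -e” and add a line like this to attempt a renewal every Monday at 03:00:

0 3 * * 1 certbot renew --quiet --pre-hook "systemctl stop httpd" --post-hook "systemctl start httpd"

Certbot only renews certificates that are close to expiry, so it is safe to schedule this frequently.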

In this article, we have successfully installed an SSL certificate from LetsEncrypt to take our Cacti web portal from “Not Secure” HTTP to secure HTTPS. The same procedure works for any other web application running on Apache; next time we shall test this on an Nginx server. If this article has been helpful, please feel free to share it with your fellow techies in your professional circles.

The Most Useful Linux Commands For Network And Systems Administrators

Linux is a powerful and versatile operating system that powers many of the world’s servers and networks. As a network or system administrator, you need to master a variety of Linux commands that can help you configure, maintain, troubleshoot, and optimize your network and system performance. In this article, we will introduce some of the most crucial Linux commands for network and system administrators, such as ip, netstat, nmap, tcpdump, and more. We will explain what these commands do, how to use them, and why they are important for your daily tasks. By the end of this article, you will have a better understanding of Linux networking commands and how to use them effectively.

1. ifconfig: Used to display network interface information.

# ifconfig -a

2. ip: Used to show/manipulate routing, devices, policy routing, and tunnels.

# ip address show

3. route: Used to display or manipulate the IP routing table.

# route -n
# route add default gw 192.168.1.1
# route add -net 192.168.2.0 netmask 255.255.255.0 gw 192.168.1.1
# route del -net 192.168.2.0 netmask 255.255.255.0

4. ping: Used to send ICMP ECHO_REQUEST to network hosts.

# ping techjunction.co
# ping 4.2.2.2

5. traceroute: Used to print the route packets take to reach a network host.

# traceroute techjunction.co
# traceroute 4.2.2.2

6. netstat: Used to print active network connections, routing tables, interface statistics, masquerade connections, and multicast memberships.

# netstat -an | more

7. ss: Used to display socket statistics.

# ss -tulpn | more
# ss -s
# ss -t -a
# ss -l

8. hostname: Used to show or set the system’s host name.

# hostname

9. dig: DNS lookup utility.

# dig techjunction.co a
# dig techjunction.co ns
# dig techjunction.co mx
# dig +short A techjunction.co

10. nslookup: Used to query internet name servers interactively.

# nslookup techjunction.co
# nslookup

11. iptables: An administration tool for IPv4 packet filtering rules, forwarding, and NAT.

# iptables -L

12. tcpdump: Used to capture sample network traffic for analysis and troubleshooting.

# tcpdump -i ens224
# tcpdump -i ens224 tcp port 80
# tcpdump -A -i ens224

13. service: Used to start|stop|restart or check running status of a Linux service or daemon

# service httpd start
# service httpd status
# service httpd restart
# service httpd stop

14. telnet: Can be used to test the connection to a port on a remote host

# telnet techjunction.co 80

15. scp: Secure Copy (Used to transfer files securely to a remote host).

# scp filename.txt username@remote_host_ip:/remote_host_dir

16. wget: Used to download files from the internet (Non-interactive).

# wget http://techjunction.co/file.zip

17. curl: A command-line tool for transferring data to or from a server using various network protocols, e.g. HTTP, HTTPS, FTP, etc. It is useful for downloading files, testing endpoints, and debugging.

# curl http://techjunction.co/api

18. iptraf: A Linux tool for monitoring and analyzing network traffic. It can provide detailed information about incoming and outgoing traffic, as well as a graphical representation of the data. It is used to diagnose network problems, optimize performance, and monitor security. (On Debian/Ubuntu the package and binary are named “iptraf-ng”.)

# apt update
# apt install iptraf-ng
# iptraf-ng

19. iftop: A Linux tool for monitoring and analyzing network traffic. It can provide detailed information about incoming and outgoing data packets flowing through a network interface and display the total bandwidth usage.

# apt update
# apt install iftop
# iftop -i eno1
# iftop -n

20. nmap: A tool for network exploration and security auditing. It is used for various purposes, such as scanning for open ports and discovering vulnerabilities in a network.

# apt update
# apt install nmap
# nmap -v -A scanme.nmap.org
# nmap -v -sn 192.168.0.0/16 10.0.0.0/8
# nmap -v -iR 10000 -Pn -p 80

21. lsof: is a command-line tool for listing open files in Linux. It can show you various types of files that are opened by different processes, such as regular files, directories, sockets, pipes, etc. It can also provide detailed information about each file, such as the process ID, the user, the file descriptor, the size, and more.

# lsof
# lsof -i :80

22. ethtool: Is a Linux tool for managing network interface devices. It can display and modify the parameters of the devices, such as speed, duplex, link modes, driver information, and more. It can also help diagnose network problems and optimize performance.

# ethtool ens224
# ethtool -s eth0 speed 100 duplex full

23. arp: Used to display or modify the ARP cache.

# arp -a

24. hostnamectl: Used to display the system hostname and related settings.

# hostnamectl status

25. mtr: MTR (My Traceroute) is a network diagnostic tool that combines the functionality of the traceroute and ping commands, sending packets to a remote host and displaying the network path and performance along the way. It can help diagnose network problems, identify potential bottlenecks or failures, and optimize performance.

# mtr techjunction.co

26. iwconfig: Used to configure a wireless network interface.

# iwconfig

27. ncat: (or netcat) is a command-line tool for reading and writing data across network connections, using the TCP or UDP protocols. It’s used for scanning ports and testing network connectivity.

# ncat techjunction.co 8080
# ncat -l 8080
# ncat --exec "/bin/bash" -l 8081 --keep-open
# ncat -zv 192.168.1.1 22

28. ssh-keygen: Generate, manage, and convert authentication keys for ssh.

# ssh-keygen -t rsa
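A common follow-up is installing the generated public key on a remote host so you can log in without typing a password; the “ssh-copy-id” utility does this in one step (using the same placeholder user and host as the scp example above):

# ssh-copy-id username@remote_host_ip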

29. nmcli: Is a command-line tool for managing and configuring network connections on Linux systems. It can create, modify, and delete network connections, and display and control the status of network devices and connections.

# nmcli
# nmcli connection show
# nmcli dev down ens193
# nmcli dev up ens193

30. nload: Linux tool used to monitor network traffic and bandwidth usage in real time

# nload

31. iperf: A Linux tool used for measuring TCP and UDP bandwidth performance. Can be used to identify bottlenecks in the network.

# iperf -c server_ip

32. fping: Used to quickly ping multiple hosts.

# fping -a -g 192.168.1.1 192.168.1.254

33. nmtui: Text User Interface utility for controlling NetworkManager.

# nmtui

34. host: DNS lookup utility.

# host techjunction.co

Let’s discuss Logical Volume Management (LVM) with practical examples using Ubuntu 22

LVM (Logical Volume Management) is a tool for managing storage devices and partitions in Linux systems. It allows you to create logical volumes, which are flexible and resizable partitions that can span across multiple physical devices. The advantage of using logical volumes is that you can adjust the size and location of your partitions according to your needs, without having to repartition your disk or lose data. You can also create snapshots of logical volumes, which are copies of the data at a certain point in time. Snapshots can be used for backup, testing, or cloning purposes. In summary, LVM provides several advantages over the traditional partition-based method, such as:

  1. You can easily resize, extend, or reduce the logical volumes without affecting the data or the file system.
  2. You can create snapshots of the logical volumes, which are point-in-time copies that can be used for backup or testing purposes.
  3. You can use striping or mirroring to improve the performance or reliability of the logical volumes.
  4. You can add or remove physical devices to the logical volumes without disrupting the system or the users.
  5. You can use encryption or compression to enhance the security or efficiency of the logical volumes.

LVM is widely used in server disk management, as it provides more flexibility and control over storage resources. LVM can help you optimize disk space utilization, improve system performance, and simplify backup and recovery processes. LVM can also enable you to use advanced features, such as RAID, clustering, or virtualization, on your server. To demonstrate LVM, we shall use the lvm2 package on Ubuntu 22.

Use the command below to check if you already have the lvm package on your server:

# lvm version

If lvm is not installed, use the command below to install it:

# apt install lvm2

The server I am using for this exercise has 5 physical disks (sda, sdb, sdc, sde and sdd); “sda” was used during the OS installation to host the boot partition, and you will notice it already has the default Ubuntu logical volume, “ubuntu-lv”.

Use the command below to list all the disks that are currently attached to your server:

# fdisk -l | grep -i /dev/sd

Use the command below to see which of the available disks above are mounted and in use by the file system:

# df -h

From the output above, we can clearly see that “/dev/sda2”, a logical partition on disk “/dev/sda”, is mounted on “/boot” and in use by the file system. In this exercise, we shall create a logical volume using two free disks, “/dev/sdb” and “/dev/sdc”, and later expand the logical volume using another free disk, “/dev/sdd”. So, let’s get into the action:

Before using any physical disk in a Logical Volume (LV), we need to first define it as a Physical Volume (PV). A physical volume can be created from a whole disk or just a partition on a disk. To create a physical volume, use the “pvcreate” command, followed by the name of the disk or partition you want to use.

Create two physical volumes on two free disks “/dev/sdb” and “/dev/sdc”:

# pvcreate /dev/sdb
# pvcreate /dev/sdc

Use the “pvs”, “pvdisplay”, or “pvscan” commands to see a summary and details of the physical volumes you have created in the step above:

# pvs
# pvdisplay /dev/sdb
# pvdisplay /dev/sdc

Notice the newly created physical volumes “/dev/sdb” and “/dev/sdc”, each with a disk size of 835.75G; also notice that a newly created PV is not yet associated with any VG. We shall get to VGs shortly!

Before we can proceed to creating the Logical Volume (LV), we first need to put the Physical Volumes (PVs) that we created into a pool, also known as a Volume Group (VG). A Volume Group is a collection of physical volumes that creates a pool of disk space out of which logical volumes can be allocated. The significance of a volume group is that it enables you to create logical volumes that can span multiple physical volumes, or use only a part of a physical volume. To create a Volume Group, use the “vgcreate” command, followed by the name of the volume group and the physical volumes you want to include.

In this example, we are going to create a volume group named “techjunction_vg” with two physical volumes “/dev/sdb” and “/dev/sdc”:

# vgcreate techjunction_vg /dev/sdb /dev/sdc

To see the details of the volume groups we created, use the “vgs” or “vgdisplay” commands (the VG we created has a size of 835.75GB x 2 = approx. 1.64TB):

# vgs
# vgdisplay techjunction_vg

At this point, we are ready to create our Logical Volume (LV). To create a logical volume, use the “lvcreate” command, followed by the name of the Volume Group (VG) and the size to allocate.

In this example, we are going to create a logical volume named “techjunction_lv” with 1.63TB of space in the volume group “techjunction_vg”:

# lvcreate -L 1.63T -n techjunction_lv techjunction_vg

To see the details of the logical volumes, use the “lvs” or “lvdisplay” commands (Take note of the LV Path as we shall need it when formatting and mounting the LV):

# lvs
# lvdisplay /dev/techjunction_vg/techjunction_lv

To be able to use the logical volume that we have created, we need to format it, create a mount point, and mount the logical volume.

In this example, we are going to use the “ext4” file system to format the logical volume “techjunction_lv”, create a mount point “/techjunction_backups” and mount the logical volume:

# mkfs.ext4 /dev/techjunction_vg/techjunction_lv

“ext4” is a Linux file system developed as the successor to “ext3”. It has significant advantages over its predecessor, such as an improved design, better performance, reliability, and new features. It supports individual files up to 16 TiB and file systems up to 1 EiB in size. It also supports transparent encryption; features such as snapshots come from layers like LVM rather than from ext4 itself.

A mount point is simply a directory in the Linux file system; we use the “mkdir” command to create the directory and the “mount” command to mount the logical volume:

# mkdir /techjunction_backups
# mount /dev/techjunction_vg/techjunction_lv /techjunction_backups/

Use the “df -h” command to display the new file system; notice the size and the mount point:

# df -h

At this point we have successfully created our logical volume and made it available for use in the file system. We can test this by changing the directory to “/techjunction_backups” and creating a few text files to read and write. However, there is one small step remaining: mount points created with the “mount” command are not persistent across system reboots, which is not good for a server headed to production, because the volume (and the data on it) will be unavailable after every reboot until someone mounts it again.

Data loss can occur when you mount the disk again using the “mount” command in Linux if the disk was not properly unmounted or synced before. This can happen if you remove the disk abruptly, power off the system, or encounter a system crash. When you mount a disk, the system may cache some data in memory to improve the performance and efficiency of the disk operations. However, this also means that some data may not be written to the disk immediately, and may remain in the cache until the system flushes them to the disk. If you unmount the disk without syncing the data, or if the system loses power or crashes, the data in the cache may be lost or corrupted. This can cause inconsistency or damage to the file system on the disk, and lead to data loss or errors when you mount the disk again. To prevent data loss, you should always unmount the disk properly using the “umount” command, or use the “sync” command to force the system to write all the cached data to the disk. You should also avoid removing the disk or shutting down the system while the disk is in use. To recover data from a damaged disk, you may need to use the “fsck” command to check and repair the file system.
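Putting that advice into commands for our setup, a clean detach-and-check cycle looks like this (note that “fsck” must only be run on an unmounted file system):

# sync
# umount /techjunction_backups
# fsck -f /dev/techjunction_vg/techjunction_lv
# mount /dev/techjunction_vg/techjunction_lv /techjunction_backups/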

To mount our logical volume permanently, edit the “fstab” file by adding the mount point entry and then run the “mount -a” command:

# echo '/dev/techjunction_vg/techjunction_lv /techjunction_backups ext4 defaults 0 0' | sudo tee -a /etc/fstab
# mount -a

Note: The “tee” command reads from standard input and writes to both standard output and a file simultaneously. The “tee -a” option appends the output to the “/etc/fstab” file instead of overwriting it.

The “mount -a” command is useful when you want to mount all the file systems that are configured in the /etc/fstab file at once, without having to specify each device or directory individually. This can save time and avoid errors when you need to access multiple file systems on your system. However, the “mount -a” command also has some limitations and risks. For example, it may fail to mount some file systems if they are not available or ready, such as network file systems or removable devices. It may also cause data loss or corruption if the file systems are not properly configured or compatible with the system. Therefore, it is recommended to use the “mount -a” command with caution, and only when you are sure that the file systems are safe and stable to mount.

At this point, we have completed the exercise of creating a logical volume and making it ready for use by the file system. We have also ensured that this mount point configuration data is persistent throughout server reboots by editing the “fstab” file.

Now that we have successfully created our logical volume (LV) “techjunction_lv” of size 1.6T, let’s test the advantage of LVM by expanding this LV. But first, let’s put a test directory and some test files on our logical volume to make sure that our data is preserved during the expansion exercise. In fact, if you want to experience the true beauty of LVM, you can test this on a live application, for example an application using a database that runs from your newly created LV. During the resizing of the LV, your application should not experience any downtime or hiccups. That’s the true potential of LVM compared to conventional partitioning!

To show the hard disks that are not in use, we use the “lsblk” command, which lists all the block devices in the system, such as disks, partitions, and logical volumes. It also shows the mount points of the devices that are in use.

For example, to see all the block devices, you can type:

# lsblk

From the above output, you can see that drives “sdd” and “sde” don’t have any partitions, logical volumes, or mount points defined under them, so we can proceed to use “sdd” for our expansion exercise.

Once again, before using the physical disk “sdd” in our logical volume (LV) “techjunction_lv”, we need to define it as a Physical Volume (PV) using the “pvcreate” command, followed by the name of the disk “/dev/sdd”:

# pvcreate /dev/sdd

Next, we need to add the new physical volume “/dev/sdd” to the volume group (VG) “techjunction_vg” that contains the logical volume (LV) “techjunction_lv” using the “vgextend” command:

# vgextend techjunction_vg /dev/sdd

Next, we use the “lvextend” command to extend the size of the logical volume “techjunction_lv”. When you extend a logical volume, you can indicate how much you want to extend the volume, or how large you want it to be after you extend it.

In this example, we are going to use all the space of the new physical volume (PV) that we just added to the volume group. i.e., 837.8G:

# lvextend -l +100%FREE /dev/techjunction_vg/techjunction_lv

We are not done resizing the logical volume; in fact, if you check the file system at this point, you will realize that the size change is not yet in effect!

The last step is to resize the file system on the logical volume using the “resize2fs” command:

# resize2fs /dev/techjunction_vg/techjunction_lv

As you can see from the output above, our logical volume (LV) “techjunction_lv” has been resized from 1.7T to 2.5T without having to restart the server and without losing the data that was on the existing logical volume.

Hope this article has helped you to appreciate the power of using LVM as opposed to the legacy partitioning system. However, it’s important to note that LVM is not a substitute for RAID because it does not provide any protection against disk failures. LVM and RAID are two different technologies that serve different purposes. LVM is a logical layer that allows you to create, resize, and manage partitions on your disks without being constrained by the physical layout of the disks. RAID is a physical layer that allows you to combine multiple disks into one or more arrays that provide redundancy, performance, or both. Without RAID implementation on your system, if one of the physical volumes that belongs to a logical volume fails, the logical volume will become inaccessible and the data on it will be lost. LVM does not have any mechanism to replicate or recover the data from the failed disk. RAID, on the other hand, can protect the data from disk failures by using techniques such as mirroring, striping, or parity. Depending on the RAID level, RAID can tolerate one or more disk failures without losing any data. RAID can also rebuild the data from the surviving disks to a new disk in case of a failure.

Therefore, LVM and RAID should be used together to enhance system reliability. By using RAID, you can create a reliable and performant storage layer that can withstand disk failures. By using LVM on top of RAID, you can create flexible and manageable partitions that can span multiple RAID arrays or use only a part of a RAID array. For example, you can create a RAID 1 array with two disks to provide mirroring, and then create a logical volume on top of the RAID 1 array to store your critical data. You can then resize, move, or rename the logical volumes as you wish, without affecting the RAID arrays.

Criterion         | Logical Volume (LV)                                                                  | RAID
Data Availability | Low, as data may become unavailable if a device fails.                               | High, as data can remain available even if one or more devices fail, depending on the RAID level and configuration.
Data Integrity    | Low, as data may become corrupted if a device fails or encounters an error.          | High, as data can be verified and corrected using checksums, parity blocks, or mirror copies, depending on the RAID level and configuration.
Data Recovery     | Difficult, as data may be lost or damaged if a device fails or encounters an error.  | Easy, as data can be recovered or rebuilt using the remaining devices, depending on the RAID level and configuration.
Data Protection   | Low, as data may be exposed or altered if a device is stolen or compromised.         | High, as data can be encrypted or authenticated using various methods, such as dm-crypt, LUKS, or MDADM, depending on the RAID level and configuration.

Table: LVM vs RAID

The latest developments in network engineering: Cisco, Juniper, Nokia and Google Making Headlines

Network engineering is the field of designing, implementing, and maintaining computer networks that enable data communication and information exchange. Network engineering is constantly evolving and adapting to the changing needs and demands of users, businesses, and technologies. Here are some of the latest trends and innovations that are shaping the future of network engineering:

Cisco, one of the leading providers of networking products and services, has announced new certifications and training programs for network engineers who want to advance their skills and careers in the evolving network industry. The new certifications include the Cisco Certified DevNet Associate, Specialist, and Professional, which focus on the development and automation of network applications and solutions. The new training programs include the Cisco DevNet Training and Certification Program, which offers courses and exams on topics such as network programmability, DevOps, cloud, and IoT. Cisco claims that these new certifications and training programs will help network engineers to become more agile, innovative, and valuable in the network industry.

Juniper Networks, a company that specializes in network solutions and services, has launched a new platform that uses artificial intelligence (AI) to simplify and automate network operations. The platform, called Juniper Paragon Automation, is a cloud-native software suite that leverages machine learning, telemetry, and closed-loop automation to provide network operators with real-time visibility, assurance, and optimization of their network performance and service quality. Juniper Paragon Automation can help network operators to reduce operational costs, improve customer experience, and accelerate service delivery.

Nokia, a global leader in telecommunications and network equipment, and Google Cloud, a division of Google that offers cloud computing services, have announced a strategic partnership to develop and deliver cloud-native 5G network solutions. The partnership aims to combine Nokia’s expertise and portfolio in 5G network infrastructure and applications with Google Cloud’s capabilities and scale in cloud computing and artificial intelligence. The partnership will focus on three areas: cloud-native 5G core network, edge computing, and network slicing. The partnership will enable network operators and enterprises to leverage the benefits of cloud-native 5G network solutions, such as agility, scalability, efficiency, and innovation.

Daily Tech Byte

Daily Tech Byte is your source of quick and concise tech news updates. Every day, we bring you the latest and most relevant information about the world of technology, covering topics such as gadgets, apps, software, hardware, cybersecurity, artificial intelligence, and more. Whether you are a tech enthusiast, a professional, or just curious, Daily Tech Byte will keep you informed and entertained with bite-sized stories that you can read in minutes. Subscribe to our Daily Tech Byte and never miss a tech beat!

Hollow Core Fiber (HCF) Technology: A New Frontier in Optical Communications

Optical fibers are the backbone of modern communication networks, enabling high-speed and long-distance data transmission. However, conventional optical fibers have a limitation: they guide light through glass, which slows down the light and causes signal loss and distortion. To overcome this limitation, researchers have developed a new type of optical fiber that guides light through air instead of glass. This is called Hollow Core Fiber (HCF).

HCF is a fiber that has a hollow region in the center, surrounded by a ring of glass tubes that look like a honeycomb. The glass tubes act as a mirror that reflects the light back into the hollow core, preventing it from escaping. The light travels faster and farther in the air than in the glass, resulting in lower latency, lower attenuation, and higher bandwidth. HCF can also support different wavelengths of light, such as visible, infrared, and ultraviolet, which can enable new applications and functionalities.

HCF has many potential use cases in various fields, such as telecommunications, sensing, metrology, medicine, and defense. For example, HCF can be used to improve the performance and efficiency of 5G networks, by reducing the delay and increasing the capacity of the wireless fronthaul and backhaul links. HCF can also be used to enhance the security and reliability of optical networks, by making them immune to electromagnetic interference, hacking, and physical damage. HCF can also be used to enable new optical technologies, such as quantum communication, optical computing, and laser-based manufacturing.

HCF is still a developing technology that faces some challenges, such as fabrication complexity, cost, and compatibility with existing optical systems. However, several companies and research institutes are working on advancing the HCF technology and bringing it to the market. HCF is expected to revolutionize the world of optical communication and open new possibilities in technology and innovation.

InfiniBand Technology: The High-Speed Interconnect for High-Performance Computing and AI

InfiniBand is a high-speed interconnect technology that enables fast and efficient communication between servers, storage devices, and other computing systems. Unlike Ethernet, a popular networking technology for local area networks (LANs), InfiniBand is explicitly designed to connect servers and storage clusters in high-performance computing (HPC) environments. InfiniBand uses a two-layer architecture that separates the physical and data link layers from the network layer. The physical layer uses high-bandwidth serial links to provide direct point-to-point connectivity between devices. In contrast, the data link layer handles the transmission and reception of data packets between devices. The network layer provides the critical features of InfiniBand, including virtualization, quality of service (QoS), and remote direct memory access (RDMA). These features make InfiniBand a powerful tool for HPC workloads that require low latency and high bandwidth.

InfiniBand has been widely adopted by the HPC community, as it powers some of the world’s fastest supercomputers and AI (artificial intelligence) systems. According to the latest TOP500 list of supercomputers, InfiniBand connects 141 systems, including two of the top five systems: Fugaku in Japan and Summit in the US. InfiniBand also supports some of the most demanding AI workloads, such as large language models, deep learning, and computer vision. For example, Microsoft uses InfiniBand to speed up the training and inference of its Turing Natural Language Generation model, which has 17 billion parameters.

InfiniBand is not only a high-performance interconnect, but also a platform for innovation and advancement. NVIDIA, which acquired Mellanox Technologies in 2020, is the leading provider of InfiniBand solutions, including adapters, switches, routers, gateways, cables, transceivers, and data processing units (DPUs).

NVIDIA has been developing new technologies and capabilities that enhance InfiniBand’s performance and functionality. For instance, NVIDIA Quantum-2 is the next generation of InfiniBand networking platform, which offers 400 Gb/s bandwidth per port, 64 Tb/s switch capacity, and advanced In-Network Computing features such as SHARP (Scalable Hierarchical Aggregation and Reduction Protocol). SHARP offloads collective communication operations to the switch network, reducing the amount of data traversing the network and increasing data center efficiency. NVIDIA also offers BlueField DPUs, which combine powerful computing, high-speed networking, and extensive programmability to deliver software-defined, hardware-accelerated solutions for the most demanding workloads.

InfiniBand is transforming the AI landscape by enabling faster, smarter, and more scalable computing. As AI applications become more complex and data-intensive, InfiniBand provides the extreme performance, broad accessibility, and strong security needed by cloud computing providers and supercomputing centers. InfiniBand also opens new possibilities for technology development, such as quantum computing, converged workflows for HPC and AI, and new interfaces and connectors. InfiniBand is not only a high-speed interconnect, but also a future-proof platform for innovation and discovery.


The 10 Hottest Tech Careers for 2024 https://techjunction.co/the-10-hottest-tech-careers-for-2024/?utm_source=rss&utm_medium=rss&utm_campaign=the-10-hottest-tech-careers-for-2024 Tue, 24 Oct 2023 22:05:33 +0000 https://techjunction.co/?p=8892

Technology is changing the world at an unprecedented pace, creating new opportunities and challenges for businesses and individuals alike. As a result, the demand for tech professionals with the right skills and knowledge keeps growing and diversifying. According to the U.S. Bureau of Labor Statistics, employment in computer and information technology occupations is projected to grow by 11% between 2019 and 2029, adding about 531,200 new jobs. Moreover, the COVID-19 pandemic accelerated the digital transformation of many industries and sectors, increasing the need for tech solutions and innovations.

So, what are the hottest tech careers for 2024? Based on the latest trends, research, and forecasts, here are 10 tech careers that are expected to be in high demand and offer attractive salaries and rewarding careers in the next few years:

1.) Software Engineer

Software engineers are responsible for designing, developing, testing, and maintaining software applications and systems that run on various devices and platforms, such as computers, smartphones, tablets, and web browsers. Software engineers use various programming languages and frameworks, such as Java, C#, Python, JavaScript, React, and Angular, to create software solutions for domains and problems such as web development, mobile development, cloud computing, and artificial intelligence. According to Glassdoor, the average salary of a software engineer in the U.S. was $107,888 as of 2020.

2.) Data Scientist

Data scientists are responsible for extracting insights and knowledge from large and complex data sets using methods and techniques such as statistics, mathematics, programming, and machine learning. Data scientists use tools and languages such as Python and SQL to collect, clean, explore, analyze, and visualize data for purposes such as business analytics, predictive analytics, and big data. According to Glassdoor, the average salary of a data scientist in the U.S. was $113,309 as of 2020.

3.) Cybersecurity Analyst

Cybersecurity analysts are responsible for monitoring, analyzing, and responding to cyber threats and incidents, using tools and techniques such as firewalls, antivirus software, intrusion detection and prevention systems (IDS/IPS), and penetration testing. Cybersecurity analysts work on various aspects and levels of cybersecurity, such as network security, application security, cloud security, and endpoint security. According to the U.S. Bureau of Labor Statistics, the median annual wage of information security analysts in the U.S. was $99,730 as of 2019.

4.) Cloud Engineer

Cloud engineers are responsible for designing, developing, deploying, and managing cloud applications and systems using cloud platforms and services such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). Cloud engineers work on various aspects and types of cloud computing, such as infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). According to ZipRecruiter, the average salary of a cloud engineer in the U.S. was $118,626 as of 2020.

5.) Artificial Intelligence Engineer

Artificial intelligence engineers are responsible for designing, developing, testing, and deploying artificial intelligence applications and systems using tools and frameworks such as TensorFlow, PyTorch, and Keras. Artificial intelligence engineers work on domains and problems such as natural language processing (NLP), computer vision, speech recognition, machine learning, and deep learning. According to Glassdoor, the average salary of an artificial intelligence engineer in the U.S. was $114,121 as of 2020.

6.) Blockchain Developer

Blockchain developers are responsible for creating, maintaining, and optimizing blockchain applications and systems using tools and protocols such as Ethereum, Hyperledger, and Bitcoin. Blockchain developers work on various types and aspects of blockchain technology, such as smart contracts, cryptocurrencies, and decentralized applications (DApps). According to ZipRecruiter, the average salary of a blockchain developer in the U.S. was $154,550 as of 2020.

7.) Internet of Things (IoT) Engineer

Internet of Things engineers are responsible for designing, developing, and integrating IoT devices and systems using hardware and software components such as sensors, microcontrollers, Arduino, and Raspberry Pi. Internet of Things engineers work on domains and applications of IoT such as smart homes, smart cities, smart agriculture, and smart healthcare. According to Indeed, the average salary of an IoT engineer in the U.S. was $101,930 as of 2020.

8.) DevOps Engineer

DevOps engineers are responsible for facilitating collaboration and integration between software development and IT operations teams using tools and practices such as automation, continuous integration/continuous delivery (CI/CD), and monitoring. DevOps engineers work on improving the efficiency and quality of software delivery and deployment by reducing errors, delays, and costs. According to Glassdoor, the average salary of a DevOps engineer in the U.S. was $99,604 as of 2020.

9.) Augmented Reality/Virtual Reality (AR/VR) Developer

AR/VR developers are responsible for creating immersive and interactive digital experiences using technologies and platforms such as Unity, Unreal Engine, Oculus, and Vive. AR/VR developers work on domains and applications of AR/VR such as gaming, education, entertainment, and healthcare. According to ZipRecruiter, the average salary of an AR/VR developer in the U.S. was $121,478 as of 2020.

10.) Robotics Engineer

Robotics engineers are responsible for designing, developing, testing, and operating robots and robotic systems that can perform various tasks and functions, drawing on disciplines such as mechanical engineering, electrical engineering, and computer science. Robotics engineers work on domains and applications of robotics such as manufacturing, agriculture, healthcare, and the military. According to ZipRecruiter, the average salary of a robotics engineer in the U.S. was $99,040 as of 2020.

These are some of the hottest tech careers for 2024 to consider if you are looking for a challenging and rewarding path in the tech industry. They are not the only roles that will be in demand, however; many others will emerge or evolve as technology advances and creates new possibilities and challenges. It is therefore important to keep learning and updating your skills and knowledge to stay relevant and competitive in the tech market.


About the Author

Joshua Makuru Nomwesigwa is a seasoned Telecommunications Engineer with vast experience in IP technologies; he eats, drinks, and dreams IP packets. He is a passionate evangelist of the fourth industrial revolution (4IR), a.k.a. Industry 4.0, and all the technologies it brings: 5G, Cloud Computing, Big Data, Artificial Intelligence (AI), Machine Learning (ML), Internet of Things (IoT), Quantum Computing, etc. Basically, anything techie, because a normal life is boring.
