Will using `nice -20` on nginx and php-fpm make my WordPress site go faster?

The nice command on Unix and Linux systems alters the scheduling priority of a process. Niceness ranges from -20 (highest priority) to 19 (lowest), so a lower value means a higher priority; in practice you would set the highest priority with `sudo nice -n -20` or `renice`, since negative niceness requires root. However, giving your PHP-FPM and Nginx processes the highest priority isn’t likely to have a significant impact on your WordPress site’s speed unless your server is busy with many other competing processes.

If PHP-FPM and Nginx are the primary services running on your server, they are already likely to be using the bulk of available CPU time. On a busy server with lots of competing processes, increasing the priority might give you a minor performance increase, but it could also starve other processes of necessary resources, leading to a less stable system overall.
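
For reference, here is a minimal sketch of how you would actually raise the priority of the already-running services; the pgrep lookups are illustrative and some_command is a placeholder:

# renice sets the absolute niceness of running processes; -20 is the highest priority (root only).
sudo renice -n -20 -p $(pgrep -d ' ' nginx)
sudo renice -n -20 -p $(pgrep -d ' ' php-fpm)

# To launch a one-off command at the highest priority instead:
sudo nice -n -20 some_command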

If you’re looking to increase your WordPress site’s speed, you might have better luck with these approaches:

  1. Caching: WordPress plugins like W3 Total Cache or WP Super Cache can drastically improve performance by reducing the number of dynamic page loads.

  2. Optimizing your database: Regularly clean your WordPress database of unnecessary data, and consider using a database optimization plugin (or WP-CLI; see the sketch after this list).

  3. Using a Content Delivery Network (CDN): This can speed up delivery of static resources like images and CSS/JavaScript files.

  4. Upgrading your hosting plan: If your site is getting more traffic than your current hosting plan can handle, upgrading to a better plan with more resources can improve speed.

  5. Optimizing images: Using a plugin to compress and optimize images can significantly speed up page load times.

  6. Keeping WordPress and its plugins up-to-date: Updates often include performance improvements, in addition to new features and security fixes.

  7. Removing unnecessary plugins: Every plugin adds some overhead to WordPress, so remove any that you don’t absolutely need.
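
Several of the items above (database cleanup, updates, plugin removal) can also be handled from the command line with WP-CLI, a tool this list doesn’t mention but that is widely available on WordPress hosts. A minimal sketch, assuming the wp binary is installed and you run it from the WordPress root:

# Optimize the database tables (item 2)
wp db optimize

# Update WordPress core and all plugins (item 6)
wp core update
wp plugin update --all

# List inactive plugins and delete any you no longer need (item 7)
wp plugin list --status=inactive
wp plugin delete some-unused-plugin   # placeholder plugin name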

Remember, changing the priority of a process should be done with care, as it can have system-wide effects on process scheduling and overall performance. It’s not a substitute for good application and server optimization.

Java Pros: Discover 6 Websites to Boost Your Income through Bounties

A Java programming veteran can find easy bounties on various platforms that reward developers for contributing to open-source projects, solving coding challenges, or participating in bug bounty programs.

Some of these platforms include:

1 GitHub
Explore open-source Java projects on GitHub and look for repositories with “good first issue” or “help wanted” labels. Many projects offer bounties for fixing bugs or implementing new features.

2 Gitcoin
Gitcoin is a platform that connects developers with projects offering bounties for contributions. Filter the available bounties by programming language (Java) and difficulty level to find easy tasks.

3 HackerOne
HackerOne is a bug bounty platform where you can earn rewards for finding security vulnerabilities in various software products, including those written in Java.

4 Bugcrowd
Similar to HackerOne, Bugcrowd connects developers with companies offering bounties for finding and reporting security vulnerabilities.

5 Topcoder
Topcoder is a crowdsourcing platform that hosts coding competitions, including those focused on Java development. Join the community and participate in challenges to earn cash prizes.

6 Codeforces
Codeforces is an online platform that hosts competitive programming contests. Although it is not strictly bounty-based, you can still sharpen your Java skills and earn recognition in the community.

Happy Bounty Hunting!

The First Human Martian: Media Giants Battle for Interplanetary Supremacy


As humanity takes its first steps towards colonizing Mars, the birth and eventual journey of the first Mars-born human to Earth are destined to capture the world’s attention. This unprecedented event presents an opportunity for media powerhouses like Netflix, Apple, and HBO to engage in a fierce competition for the exclusive rights to document and broadcast the life of the first Martian. In this essay, we will explore the rivalry among these media giants and the potential impact of their ongoing series on the individual at the center of this unfolding story.

The First Martian: A Tale of Interplanetary Celebrity

From the moment the first Martian is born, they will be thrust into the global spotlight. As an interplanetary citizen, this individual will undoubtedly attract immense public interest, and media companies will be eager to capitalize on this fascination. The prospect of creating an ongoing series documenting the life of the first Martian presents a unique opportunity for media giants like Netflix, Apple, and HBO to expand their content offerings and solidify their positions as industry leaders.

The Battle for Martian Supremacy

As the race to secure exclusive rights to the first Martian’s story intensifies, media companies will deploy a variety of strategies to outmaneuver their competitors. From lavish production budgets to A-list creative teams, these companies will spare no expense to ensure their series becomes the definitive account of the first Martian’s journey. Additionally, they may attempt to forge strategic alliances with key players like SpaceX, further enhancing the appeal of their programming.

However, this high-stakes battle raises numerous ethical concerns. The relentless pursuit of ratings and viewership may overshadow the need to protect the privacy and emotional well-being of the first Mars-born human. Moreover, the commercialization of their story may reduce their unique experiences to mere entertainment, potentially undermining the importance of their groundbreaking journey.

Navigating Ethical Boundaries in a New Frontier

As media giants vie for the chance to tell the first Martian’s story, it is essential to recognize the ethical implications of their endeavors. The life of the first Mars-born human should not be commodified at the expense of their dignity and autonomy. Rather, media companies must carefully consider the potential impact of their series on the individual at the heart of this narrative.

In the face of intense competition, it is crucial for these media behemoths to prioritize empathy, integrity, and respect in their portrayals of the first Martian. By doing so, they can provide audiences with a captivating and educational glimpse into the life of an interplanetary citizen while preserving the humanity of the individual involved.

Conclusion: A Delicate Balance Between Entertainment and Ethics

The birth and journey of the first Mars-born human will undoubtedly serve as a catalyst for fierce competition among media giants such as Netflix, Apple, and HBO. As they battle for the exclusive rights to create an ongoing series, the ethical implications of commercializing the first Martian’s story must not be overlooked. While the prospect of interplanetary fame and viewership is enticing, the preservation of the individual’s humanity and well-being must remain the primary concern. In this new era of space exploration, we must find a delicate balance between entertainment and ethics, ensuring that the first Martian’s story is told with both wonder and respect.

Mass delete GitHub Workflow run logs with this script

GitHub doesn’t allow mass deletion of workflow action run logs; it takes two clicks to delete each run log in the web UI.

If you want to delete hundreds of these, the only practical way is to script it.

Luckily you can do so using the gh GitHub command-line tool and some JSON parsing with the jq tool.

Requirements
gh – brew install gh
jq – brew install jq
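
Here is a minimal sketch of the kind of script you could use; OWNER and REPO are placeholders, and it assumes you have already authenticated with gh auth login:

OWNER=youruser
REPO=yourrepo

# Fetch the 30 most recent workflow runs and delete each one by ID.
gh api "repos/${OWNER}/${REPO}/actions/runs?per_page=30" \
  | jq -r '.workflow_runs[].id' \
  | while read -r run_id; do
      echo "Deleting run ${run_id}..."
      gh api -X DELETE "repos/${OWNER}/${REPO}/actions/runs/${run_id}"
    done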

When the script runs successfully, it deletes 30 run logs at a time (the GitHub API’s default page size), so you may need to run it several times to clear hundreds of logs.

Controlling Dopamine Levels with Gut-Based Nanobots: A New Approach to Parkinson’s Disease and Depression


Today, I spent nearly 3 hours writing a science fiction paper on nanobots that can synthesize hormones in the human gut. I used a tool called ChatGPT, which is a large language model trained by OpenAI. It was able to assist me in writing a comprehensive and detailed paper that covered various aspects of the technology, including the engineering and legal sides.


I was amazed at how quickly ChatGPT generated content and how accurately it incorporated the various details and technical terms I provided. It was almost as if I had a team of Ph.D. candidates and postdoc researchers working alongside me, helping me write the paper.

As I was working on the paper, I couldn’t help but think about how powerful this technology could become in the future. With more advanced versions of ChatGPT, it is not hard to imagine a world where researchers are able to write high-quality, factual papers in just a few hours. This would allow for a tremendous acceleration of knowledge and progress, as predicted by Kurzweil’s law of accelerating returns.

Overall, my experience with ChatGPT was extremely positive and I can’t wait to see what the future holds for this technology. It has the potential to revolutionize the way we conduct research and advance humanity’s understanding of the world.

Also, this blog post was created with ChatGPT.

What is the Rust equivalent to Java’s PrintWriter?

In Rust, the closest equivalent of Java’s PrintWriter is the std::io::Write trait, which is implemented by many types that can write data to an output stream, such as a file or a network socket.

To use Write to write text to an output stream, you can use the write_all method, which takes a byte slice as an argument and writes it to the output stream.

You can convert a string to a byte slice using the as_bytes method.

Here is an example of how you might use Write to write text to a file:

use std::fs::File;
use std::io::Write;

fn main() -> std::io::Result<()> {
    let mut file = File::create("output.txt")?;
    file.write_all(b"Hello, world!")?;
    Ok(())
}

If you want buffered output, similar to wrapping a writer in Java’s BufferedWriter, you can use the std::io::BufWriter type.

This type wraps a Write implementation and buffers the output, improving performance by reducing the number of calls to the underlying write operation.

Here is an example of how you might use BufWriter to write text to a file:

use std::fs::File;
use std::io::{BufWriter, Write};

fn main() -> std::io::Result<()> {
    let file = File::create("output.txt")?;
    let mut writer = BufWriter::new(file);
    writer.write_all(b"Hello, world!")?;
    writer.flush()?;
    Ok(())
}

You can also use the writeln! macro from the standard library (the closest analogue to PrintWriter’s println) to write a line of text to an output stream.

This macro takes a Write implementation and a format string as arguments, and writes the formatted string to the output stream followed by a newline character.

Here is an example of how you might use writeln! to write a line of text to a file:

use std::fs::File;
use std::io::Write;

fn main() -> std::io::Result<()> {
    let mut file = File::create("output.txt")?;
    writeln!(file, "Hello, world!")?;
    Ok(())
}

The difference between a Slice and an Array in Rust

In Rust, a slice is a reference to a contiguous section of a larger data structure, such as an array or a vector.

It is represented using the syntax &[T], where T is the type of the elements in the slice.

A slice does not own the data it refers to; it just provides a way to access the data in the original data structure.

An array, on the other hand, is a fixed-size data structure that owns a contiguous block of memory.

It is represented using the syntax [T; N], where T is the type of the elements in the array and N is the size of the array.

An array’s size must be known at compile time, and its elements are stored inline (for a local variable, typically on the stack) with no heap allocation.

One key difference between slices and arrays is that slices are dynamically sized, while arrays have a fixed size.

This means that you can create a slice that refers to a portion of an array, but you cannot create an array that refers to a portion of another array.

Here is an example that demonstrates the difference between slices and arrays:

let arr = [1, 2, 3, 4, 5];
let slice = &arr[1..3]; // slice contains the elements [2, 3]

Another difference between slices and arrays is that slices are more flexible and can be used with a wider range of functions and data structures. For example, a function that takes a slice (&[T]) accepts arrays of any length, vectors, and sub-ranges alike, whereas a function whose parameter is an array type ([T; N]) only accepts arrays of that exact length.
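
For instance, here is a small illustrative sketch (the sum function is just an example name) showing that one slice-taking function accepts a borrowed array, a sub-range of it, or a vector:

fn sum(values: &[i32]) -> i32 {
    values.iter().sum()
}

fn main() {
    let arr = [1, 2, 3, 4, 5];
    let vec = vec![10, 20, 30];

    println!("{}", sum(&arr));       // a whole array borrowed as a slice
    println!("{}", sum(&arr[1..3])); // a slice of part of the array
    println!("{}", sum(&vec));       // a Vec also coerces to a slice
}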

Slices are also cheap to work with: a slice is just a pointer and a length, so creating or passing one does not copy the underlying elements or allocate any memory.

In general, slices are the more commonly used data type in Rust because they are more flexible and easier to work with than arrays. However, there are cases where using an array may be more appropriate, such as when you need to allocate a fixed-size data structure on the stack for performance reasons.

How to build your Docker image using the same Dockerfile regardless of the host architecture

Problem

If you are using Docker on a Mac M1 (an arm64 platform), you don’t want to use amd64 as the architecture for your Linux images.

You could keep two FROM lines in your Dockerfile and comment one out depending on where you’re building the image:

Dockerfile

# Building on Apple Silicon host
FROM --platform=linux/arm64 ubuntu:20.04

# Building on Intel/x86_64 host
#FROM --platform=linux/amd64 ubuntu:20.04

Eventually this becomes very annoying.

Solution

You can pass a build-time argument when you invoke docker build.

Put this in your Dockerfile:

ARG BUILDPLATFORM
FROM --platform=linux/$BUILDPLATFORM ubuntu:20.04

On your docker_build_image.sh script:

# uname -m prints "arm64" on Apple Silicon and "x86_64" on Intel hosts,
# so the FROM platform above becomes linux/arm64 or linux/x86_64 respectively.
export BUILDPLATFORM=$(uname -m)
docker build --build-arg BUILDPLATFORM=${BUILDPLATFORM} -t myimagename .

Here’s my first very rough draft of “LifeTips”.

https://github.com/gubatron/LifeTips#life-tips

It is a short manuscript with actionable tips to live a better life; it’s there primarily as a manual for my kids for when I die.

Advice is grouped into 6 sections:
TIME
BODY
MIND
MONEY
WORK/LEADERSHIP (Entrepreneurship)
PEOPLE

Feedback is much needed, valuable, and very welcome.

If you have corrections or additions, feel free to create an issue, or just fork the project, make the fix, and send a pull request for review.

All contributors will be credited in the contributors page.

How to resize an AWS EC2 EBS root partition without rebooting in 3 steps

Go to the AWS EBS dashboard and modify the volume size. It might be good to create a snapshot first for safety, though this has never failed for me.
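
If you prefer the command line over the console, the same resize can be done with the AWS CLI; a sketch with a placeholder volume ID:

# Optional: grow the EBS volume from the CLI instead of the console.
# The volume ID below is a placeholder; 25 is the new size in GiB.
$ aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 25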

# 1. Check the device of your partition
$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 28.1M 1 loop /snap/amazon-ssm-agent/2012
loop1 7:1 0 97M 1 loop /snap/core/9665
loop2 7:2 0 55M 1 loop /snap/core18/1880
loop3 7:3 0 71.3M 1 loop /snap/lxd/16100
xvda 202:0 0 25G 0 disk
└─xvda1 202:1 0 20G 0 part /
xvdf 202:80 0 1T 0 disk /mnt/ebs/frostwire-files
xvdg 202:96 0 16G 0 disk /mnt/ebs/oldroot

# 2. Grow the partition
$ sudo growpart /dev/xvda 1
CHANGED: partition=1 start=2048 old: size=41940959 end=41943007 new: size=52426719 end=52428767

# 3. Extend the file system
$ sudo resize2fs /dev/xvda1
resize2fs 1.46.5 (30-Dec-2021)
Filesystem at /dev/xvda1 is mounted on /; on-line resizing required
old_desc_blocks = 3, new_desc_blocks = 4
...

# Done, new size is reflected with df
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 25G 19G 5.6G 78% /