The software security argument why Ledger Recover is a security risk

Introduction

Ledger has rolled out a “feature” where users can store their Ledger Nano seed-phrase remotely for a payment of $10/month. The feature allows an encrypted version of the seed-phrase to leave the hardware wallet! This has led to a great deal of controversy. Some people think it’s the cool thing to do to give Ledger a break after it endangered people’s money to make money off its users. Besides Ledger admitting that a government subpoena can lead to handing over cryptocurrency to the government with this feature, there’s a technical argument why this is inexcusable.

It’s attractive to just say “I’m outside the hive-mind, and hence Ledger did nothing wrong, or the security model is unchanged”, but this misses the whole point of why Ledger Recover is a problem from a software security point of view.

As a software engineer who works with systems programming languages, I see lots of misconceptions about this whole debacle. Even Ledger’s former chairman seems to skip this point and claims it’s just a trust issue. I’d like to explain the nuance as someone who has dealt with these issues and worked in this field for 15 years.

It’s not about trusting or not trusting Ledger

Even if we trust Ledger 100%, the problem I’m about to describe isn’t related to that. The problem is: we’re all human, and human software engineers all make the same mistakes… we have over 50 years of evidence for that.

All software has bugs, and that’s the problem

Over the course of humanity writing software, we learned how bugs happen, how they appear in code, and how little control we have over them. The Ledger firmware is most likely written in C, which is the language most vulnerable to these kinds of issues. If you’d like to see how dumb, little bugs can lead to compromising a whole machine, I recommend PwnFunction’s video “Dangerous Code Hidden in Plain Sight for 12 years”. It’s an 18-minute video with a great example of how a dumb mistake in Linux led to exploits, including gaining root access from an unprivileged user. No matter how arrogant software engineers are, they make dumb mistakes, because they’re human. We all do.

This problem is so bad that there’s a (relatively) new programming language that advertises solving many of the memory issues that languages like C and C++ easily allow. It’s called Rust; you’ve probably heard of it. It’s not a silver bullet, but it tries to solve most of these issues… and we’re still not done with this problem. That’s a topic for another day.

But how can an attacker exploit Ledger Recover?

We don’t know, and that’s the scary part. If we knew, we would’ve fixed it, right? But I’ll give you an introduction to how these bugs happen and how they’re exploited.

Here’s an example in the next section… in a nutshell, without going into too much detail.

Programs are dumb counters, and bugs allow hackers to exploit them

When you start a program (whether on a Ledger, on your PC, or anywhere else), the operating system maps your program into memory and runs what’s called a program counter. The program counter is just a dumb counter that keeps incrementing. If your system is 32-bit (CPU instructions are 4 bytes wide), and your program is mapped to start at byte number 10000 in memory, then the counter is simply incremented on every instruction: 10000, then 10004, then 10008, then 10012, etc. On every increment, the CPU reads the instruction at that address in memory and executes it.

(for the “actually” people, I’m leaving out tons of details such as caching… but let’s keep this simple).

In all CPU architectures, there’s an instruction called the “jump” instruction. A jump gives the CPU a new address in memory to “jump” to: the program counter changes its value, so that different code is executed. This is how conditional statements work… “if this is larger than that, do this, otherwise do that”.

Functions and jumps + bugs = disaster

A function, in programming, is a group of instructions/functionality that’s grouped in one place in memory (again, a simplification). This is one of the jobs of the jump instruction… to do “function calls”, and run a group of instructions when necessary.

What can happen is that a piece of software has something called a buffer-overflow bug. A buffer overflow basically gives the program access to other sections of memory it’s not supposed to access. This is, kind of, the main cause of problems in C programming.

If you’re old enough in cryptocurrency, remember when EOS had a bug, discovered by a security company, that allowed remote code execution? That was a buffer overflow.

If a hacker discovers a buffer overflow, they can write jump instructions, for example, to force the program to call functions it’s not supposed to call… say… the function that extracts the seed-phrase and sends it over USB? Or maybe skip the encryption function and replace it with an identity function that does nothing? That’s the problem.
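To make this concrete, here’s a deliberately simplified C-style sketch. This is not Ledger’s code, and the names are made up; it only illustrates the class of bug:

#include <string.h>

void encrypt_seed(void)       { /* the function that is supposed to run */ }
void dump_seed_over_usb(void) { /* the function that should never be reachable from here */ }

struct request {
    char buffer[16];        /* attacker-controlled input gets copied here */
    void (*handler)(void);  /* a function pointer stored right after the buffer */
};

void handle_request(const char* input)
{
    struct request req;
    req.handler = &encrypt_seed;
    /* BUG: no length check. Input longer than 16 bytes overflows `buffer`
       and overwrites `handler` with attacker-chosen bytes, e.g. the
       address of dump_seed_over_usb(). */
    strcpy(req.buffer, input);
    req.handler();          /* the CPU now jumps wherever the attacker pointed it */
}

Real-world exploits are more involved than this, but the principle is the same: a dumb copy without a bounds check ends up redirecting the program counter.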

“It’s a new attack vector”

When Ledger writes a function in the firmware that extracts the seed-phrase, they have basically gifted all hackers a function they can try to reverse-engineer and abuse, thanks to some bug a Ledger engineer will eventually create, just like all humans have been creating bugs for the last 50 years. That’s why people are saying “it’s a new attack vector”.

To Ledger staff: You’re not gods. You’re humans. Just like us. We all write bugs, and you’re not an exception.

That’s why a firmware without this functionality at all is valuable.

How to back up and end-to-end encrypt your remote server data without manually inputting a password

Introduction – The problem

The problem is the following: you run some service where you need regular backups. The data to be backed up can be represented as something you can put in a tarred archive. The common way to do this is to pipe the tar output to OpenSSL and encrypt with AES-256, which is fine, except that it requires inputting the password constantly or storing a static password on the server. If you’ve guessed that we’re going to use public key cryptography, you’re right, but there is a problem there.

The typical way of doing this with passwords

If you research this topic online, you’ll find the internet flooded with solutions that look like this:

$ tar -czpf - /path/to/directory-to-backup | openssl aes-256-cbc -salt -pbkdf2 -iter 100000 | dd of=output.aes

There are a few problems with this… let me state them and let’s see whether we can do better:

  • You have to input the password manually for this to work, so this cannot go into a cron/periodic job unless you expose the password
  • Since you have to type the password manually, it has to be some memorable text, which lacks entropy, given that pbkdf2 (as far as I know) uses SHA256; and if you know anything about bitcoin mining, you’ll know that SHA256 calculations have become very cheap… meaning that a password you type by hand most likely lacks the entropy needed for security
  • It uses AES-256-CBC, which lacks authentication… but let’s not get into that, since the OpenSSL command-line tool has support issues for the better alternative, GCM. I’ll provide a simple solution to this at the end

Let’s see if we can do better.

How can public key cryptography solve these problems?

Introduction to public key cryptography

Feel free to skip this little section if you know what public key crypto is.

We call it symmetric key cryptography when encryption and decryption, or signing and verification, use the same key. In public key cryptography, or asymmetric key cryptography, we use the public key to encrypt data and the private key to decrypt it. Or we use the private key to sign data, and the public key to verify signatures.

But public key encryption doesn’t really help as a replacement

It may come as a surprise, but there’s no practical way to encrypt large data directly with public keys. The reason is that encryption algorithms usually work either in blocks (like AES) or in streams (like ChaCha20). The public key encryption we have either operates on a single small block (like RSA encryption) or is used to agree on symmetric keys through some agreement protocol, like Diffie-Hellman, after which symmetric encryption takes over. And given that RSA encryption is very expensive, because it’s an exponentiation of a large number, it’s not really practical for large data, so no standard algorithms exist in software like OpenSSL to encrypt your backups that way. You’ll never see RSA-4096-CBC or RSA-4096-GCM for that reason.

How then do we encrypt without a password?

The trick is simple. We generate a truly random password, encrypt that password with RSA and store it in a file, use the password for the symmetric encryption, then forget the password and move on. The password can then only be recovered with the private key (which is itself encrypted with a passphrase). Let’s see how to do this.

First, on the local machine where you’re OK with unpacking the data, generate your RSA keys. Generate your private decryption key (encrypted with a passphrase):

$ openssl genrsa -aes256 -out priv_key.pem 4096

and from that, generate your public key

$ openssl rsa -in priv_key.pem -outform PEM -pubout -out pub_key.pem

The passphrase you use to encrypt this private key will be the only password you’ll ever type. Your backups don’t need it.

Now copy the pub_key.pem file to the remote machine where backups have to happen.

If you have a backup bash script, you can do the following now:

#!/bin/bash

# generate a random password (64 random bytes), which gives far more entropy
# than the 256-bit AES key needs, so PBKDF2 won't be a problem
password=$(openssl rand -hex 64)

# tar and encrypt your data with the password
tar -czpf - /path/to/directory-to-backup | openssl aes-256-cbc -salt -pbkdf2 -iter 100000 -pass "pass:${password}" | dd of=output.aes

# encrypt the password using your RSA public key, and write the output to a file
# (printf avoids appending a newline; rsautl reads from stdin when no -in is given)
printf '%s' "${password}" | openssl rsautl -encrypt -inkey pub_key.pem -pubin -out output.password.enc

To recover the password from that file and print it to stdout, use this command:

$ openssl rsautl -decrypt -inkey priv_key.pem -in output.password.enc

This will print the password to your terminal, which you can copy and use to decrypt your encrypted archive.

Finally, to decrypt your data, run the command:

$ dd if=output.aes | openssl aes-256-cbc -d -salt -pbkdf2 -iter 100000 | tar -xpzf -

and when asked, input the password you copied from the private key decryption of the command before

WARNING: It’s WRONG to reuse the same password and just encrypt it again and again with RSA (with this scheme, at least). Generate a fresh random password for every archive; if one password ever leaks, it should compromise a single backup, not everything you’ve ever backed up.

Conclusion

No need, anymore, to choose between encrypting your backups and not having to be there to type a password. You can have both with this simple trick. This is also more secure, because a randomly generated password of this size can never be brute-forced through the (cheap) pbkdf2; brute-forcing the password is no cheaper than brute-forcing the AES key itself, which guarantees the full 256-bit encryption strength.

As for using CBC instead of GCM: you can create your own HMAC to authenticate your archives. Or, even easier, you can run sha256sum on your archive and encrypt the output with your public key, or even sign it. That’s good enough and can be streamlined into whatever you automate for your backups.
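As a minimal sketch of the HMAC option: the openssl dgst command can compute an HMAC directly. Here I reuse the random password from the backup script as the MAC key, which is just an assumption for brevity; a separate random key works the same way.

# inside the backup script: compute an HMAC-SHA256 tag over the encrypted archive
openssl dgst -sha256 -hmac "${password}" -out output.aes.hmac output.aes

# at restore time, after recovering the password with your private key,
# recompute the HMAC and compare it with the stored tag
$ openssl dgst -sha256 -hmac "<recovered password>" output.aes

If the two values match, the archive wasn’t tampered with.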

I just want to mention that this isn’t an innovation of mine by any means… in OpenSSL this is called “envelope encryption”, but for some reason it’s only supported when using the C library and not when using the OpenSSL executable. It’s also a common way of encrypting in many applications, because asymmetric encryption of bulk data is impractical with the available tooling. I wrote this article because it seems that no one talks about this and everyone is busy typing possibly insecure passwords for their encrypted archives. Hopefully this will become more common after this article.

Abandoning my server provider of 10 years: Stay away from myLoc/ServDiscount/Webtropia (review)

Introduction

It’s a hot question which server provider to use nowadays. The industry has boomed. The only thing that matters, though, is the reliability of a provider. By reliable I mean: the continuity of the service with no interruptions, the mutual understanding that problems can happen, and the urgency to correct problems and continue operation with minimal down-time. Ten years ago, myLoc was a good company. Nowadays, they’re so cheap and bad that I’m leaving them. I guess inflation didn’t just inflate the currency.

Why write this in a blog post?

Because people on the internet have become so unempathetic, robotic, and exhausting to talk to or discuss with, and my blog minimizes that because no one cares about my blog. Maybe it’s a diary at this point. I can’t remember the last time I posted something on social media and didn’t get people licking the boots of corporations under the guise of “violating their terms” or whatever other dumb reason. As if common sense doesn’t have any value anymore. Trust isn’t of any value. I’m quite sure some genius out there will find a way to twist this story and make it entirely my fault, because humanity now evaluates moral decisions based on legality… what a joke we’ve become. Nowadays, you’re not allowed to be angry. You can’t be upset. You shouldn’t be frustrated. You can’t make mistakes. If you express any of your feelings about an experience, your expression, regardless of how reasonable it is, will be called a rant and dismissed. So, I guess my blog is the best place to put this “rant”, if you will. Who cares after all?!

I guess this will have a better chance to reach people than any robotic social media out there.

Story starts with bad internet connection

A few months ago I contacted ServDiscount (a subsidiary of myLoc, where my server is, IIUC) about constant internet connection problems (ticket 854255-700134-52). The connection problems had started about a year earlier, but recently got really bad (while my home internet is around 1 Gbps). The problem manifested in many ways:

  • File copying through sftp/ssh was almost impossible. The speed throttled continuously and downloads would stop.
  • If I used any ssh tunnel, the terminal would become unusable while the tunnel was doing any transfer; even small amounts of data were enough to block it
  • http(s) downloads of large files became impossible; a simple attempt to download a 400 MB file couldn’t be completed.
  • Game servers running on my machine there could never finish a game due to connection loss
  • ftps downloads constantly showed “connection reset by peer”, which I believe is the underlying problem behind everything else. The server-side connection was just resetting all the time.

As a patient and busy person, I didn’t care. This went on for MONTHS. I’m not exaggerating. Then the problem went completely out of control. Under the assumption that “it’s my fault”, I rented a separate VPS from ServDiscount, and guess what? Same problems. The VPS had NOTHING but a game server in it (plus an iptables firewall with a blocked input policy and a few open ports), yet it was impossible to finish a game or two. People from all around the world would get disconnected after a while.

Bad support, all the way (starting October)

When I contacted support about this connection problem, the first blind response was “it’s my fault”. Because for this (bad) company, it can never be their fault. All their configuration is perfect. Everything they do is perfect. They can never make mistakes. That’s what pissed me off. They couldn’t explain the disconnects on the gaming server OR my original server. But of course, they claimed it might be a “botnet”… because why not? Since it’s virtually impossible to disprove the presence of a botnet beyond plausible deniability, they can never lose that argument. Yet with the second server present, it was easy to prove that something was wrong on their end. It was easy to reproduce the problem, since an http download would simply fail. That was ready to view, test and debug. So, after that pressure, some guy in their support said “I’ll take a look” and asked for access. I took the risk and created an account for them with sudoers power and added their public key, but they never logged in. That was easily observable in the logs, for weeks. Just empty promises with zero accountability.

After nagging them, they said “they’re busy”… great support. As if I’m asking for a favor or something.

Then I got exhausted, and to appease them, I decided to wipe my server. I wish it were that easy.

Get your backups, if you can

Because of these internet issues, my backups were trapped on that server. I tried to download them in many ways, but the connections to my backup software kept getting reset. Keep in mind that this software had been in use for years. It’s not like I invented these solutions recently or something. In fact, my only “invention”, because of these problems, was to make these scripts run multiple times a day… just in an attempt to get the downloads to go through. For over a month, I couldn’t download any backups. My backup logs were flooded with “connection reset by peer” errors. What the hell am I supposed to do now?

Well, I remained patient. It was a mistake. I should’ve probably escalated and gotten angry? But I don’t think they would’ve done anything. When it’s a bad company, it’s a bad company.

Docker, as a tool to easily move away from bad situations

I didn’t know much about docker other than that it’s a containerization service, and I had barely used it. In fact I used LXD more, because it’s modeled around preserving whole OSes, while docker is modeled around stateless containers, where the software is renewed constantly but the data is preserved.

The idea was simple: instead of having to install all services on bare metal and suffer when incompetent support treats me like this during migrations, I could just tar the containers with their data, move them somewhere else, and continue operation as if nothing happened. This way, I save time I don’t really have, since I now work as a lead engineer and can’t actively solve the same problems like I did back when I was in academia. Sad fact of life: the older you grow, the less time you have.

So, I started learning docker. Watched tutorials, read articles, how-to, and many other things. I built images, did many things, and things were going fine. I containerized almost all my services. But one service was the cause of all evil…

Containerizing email services… big mistake and lots learned

The problem with containerization is that you’ll often need a proxy to deliver your connections. You can set up your docker container to share the host’s network, but I find that ugly, and it defeats the isolation picture of docker. My email server was proxied through haproxy.

Unfortunately, because of the proxying, and because of my lack of understanding of how it worked (I was still learning), I misconfigured my email proxy, and all incoming connections to the email server were seen as “internal” connections, because their source was the proxy software inside the server’s network. That basically meant that all incoming email relay requests were accepted.

Our Nigerian friends and their prince were ready

Once that mistake was made, within a day, the server was being used as a relay for spam.

ServDiscount and their reaction

On 16.12.2022, because one email ended up in a spam trap, the “UCEProtect” service flagged my server. Let’s try to guess what ServDiscount/myLoc did:

  1. Contacted me, informed me of the issue, and asked me to correct it
  2. Switched the server to rescue mode, so that I could investigate the issue through the logs
  3. Completely blocked access to my server, with no way for me to access the logs to understand the problem
  4. Refused to unblock the server (even in rescue mode) unless UCEProtect gives its stamp of approval

Yep, it’s 3 and 4.

It’s completely understandable to block my server. After all, it was sending spam. No question there. But the annoying part is that I only found out about this whole thing by coincidence, when I tried to access my server and couldn’t. They only sent an email, which I didn’t receive because my email server was blocked. They didn’t attempt a phone call, an SMS, or anything. They didn’t even open a ticket IN MY ACCOUNT WITH THEM!

You might wonder whether it can get any more careless, passive and negligent from their end. Yes, it can!

Remember that I couldn’t even download my backups because of their crappy internet connection, which they never diagnosed or helped fix. So I’m now literally trapped with no email access, my data locked away (for who knows how long), and… it gets worse.

I contacted their customer support. What do you think the answer was?

  1. We understand. Let us switch your server to rescue mode so that you can understand what happened
  2. Remove your server from UCEProtect’s list (by paying them money) then come back

You guessed it. It’s no. 2.

And again, remember, I can’t even gain rescue access to my server. So until this point, I don’t even have an idea what happened.

I went to UCEProtect and paid them around $100 (it was a Friday). Then I went back to customer support. Guess their response:

  1. Thank you for removing your server from the list. Here’s access back so that you can investigate the issue.
  2. We have zero communication with the “abuse department”, so wait for them to react… fill the form they sent you to the email that you can’t access.

You also guessed it. No. 2.

No amount of tickets, phone calls, or begging helped. A robot would’ve understood better that I couldn’t access my email and therefore had no way of responding.

Whether it’s bad policy or otherwise, they never help when you need help. They couldn’t care less. So what if your server is down for days or even weeks because of a UCEProtect listing? We don’t care, and we won’t help you fix the problem.

DNS change

While support was completely ignoring me, it occurred to me that I could switch my email MX DNS record to another service and receive my emails there. Thanks to how MTAs (mail transfer agents) work, they don’t give up delivery so easily. On 17.12.2022, I got the email at another email provider, which I paid just to solve this problem, and filled in the form (without knowing why the issue happened, because, again, how the hell would I know without access to my logs, not even in rescue mode?!).

The holy abuse department is on the other side of the universe

So, while I did everything they asked for (I filled in the form, I removed my IP address from UCECrap), I didn’t get my server back until days later. Because the abuse department is simply unreachable. There’s basically zero urgency to solve such a problem for a customer (who’s been with them for 10 years). Who gives a flying f**k if your server and work halted for days?! Not myLoc, apparently.

Let’s summarize

  • Bad internet connection with no help, so I couldn’t even get my backups
  • No urgency to solve a problem I obviously didn’t cause deliberately, and zero value placed on 10 years as a customer
  • No access to the server in rescue mode or over VPN, yet I’m expected to explain how the outbreak happened, without logs, who knows how
  • Only contact through email, which was blocked, which they never addressed
  • Even after fulfilling their conditions, no one cares. No one helped, because the “abuse department” is on the other side of the universe.

What’s the point of having a server with a company that treats its customers like that?

What’s the point when accountability is too much to ask?

I don’t know.

Conclusion

I’m not gonna sit here and claim that I’m perfect and none of this is my responsibility. But at the very least I expect common-sense treatment of customers, urgency to help, a reasonable way to access the server and see the logs, whether in rescue mode or over VPN or whatever, and a reachable “abuse department” that doesn’t hold all the keys with zero accountability to customers. All this, on top of my having been with them for a decade.

But basically, the lesson here is: if anything goes wrong with your server at myLoc, you can go f**k yourself. No one will help you. You don’t really matter to them. Unlike other companies, which will simply put your server in rescue mode and ask you to fix the problem, myLoc will just block you. Maybe they’re taking a cut from UCEProtect? I don’t know. But you can only wonder why a company would be that stupid. I’m out.

Have a great one.

A simple, lock-free object-pool

Introduction

Recently, I was using Apache Thrift for a distributed, high-performance C++ application. Apache Thrift is an RPC framework for calling functions on the other end of a network connection and across different languages. On a single thread, using an RPC is a bad idea if performance is the goal. So the way to get high performance from such a system is to put multiple clients in many threads. However, the problem is that Apache Thrift’s clients are not thread-safe! What do we do?

The solution is an object pool of clients, and since the application is high-performance, I chose to make it a lock-free object pool. Lock-free means that no mutexes are involved in the design, only atomic variables. I couldn’t find anything out there, so I implemented my own, which I hope will be useful to others, too. While this design is lock-free, it’s not “wait-free”. Meaning that, obviously, if all objects in the pool are in use, the system will block/wait until an object becomes available, or the programmer has to set up an asynchronous waiting mechanism. It’s also possible to “try” to get an object in this design.

In the following I’ll discuss my design of this object pool.

The design

The design of this object pool is simple.

  • There’s a vector of size N that holds N initialized objects. The initialization of the objects is done in the constructor, by passing a functor/lambda that initializes every object and puts it in the vector (hence, the objects must be at least move-constructible).
  • I used a lock-free queue from boost. The queue holds integers ranging from 0 to N-1. For an object to be available, its index in the vector has to be in this queue.
  • If borrowing is successful, a “BorrowedObject” is returned, which contains a pointer to the borrowed object. This is a safety mechanism: the object is returned to the pool automatically when the BorrowedObject is destroyed.

Shall we look at some code?

template <typename T>
class ObjectPool
{
    AvailableObjectsQueueType availableObjectsQueue; // e.g., a boost::lockfree::queue of free indices
    std::size_t               objectCount;
    std::vector<T>            objects;
    std::atomic_uint64_t      availableObjectsCount;
    std::atomic_bool          shutdownFlag;

public:
    friend BorrowedObject<T>;
    ObjectPool(std::size_t Size, std::function<T(std::size_t)> objectInitializers);
    ObjectPool()                  = delete;
    ObjectPool(const ObjectPool&) = delete;
    ObjectPool(ObjectPool&&)      = delete;
    ObjectPool& operator=(const ObjectPool&) = delete;
    ObjectPool& operator=(ObjectPool&&) = delete;

    BorrowedObject<T> borrowObj(uint64_t sleepTime_ms = 0);
    BorrowedObject<T> try_borrowObj(bool& success);
    std::size_t       size() const;
    uint64_t    getAvailableObjectsCount() const;
    std::vector<T>&   getInternalObjects_unsafe();
    void              shutdown();
    ~ObjectPool();
};

As you can see, the class is not copyable or movable, for simplicity. Besides that, the following methods can be seen:

  • The constructor: Takes the size of the pool, and a functor/lambda to initialize the objects. The parameter of this functor/lambda is of type std::size_t, so that the initialization can depend on the number of the object in the pool.
  • borrowObj(): A blocking method for borrowing an object from the pool.
  • try_borrowObj(): A non-blocking method for borrowing an object from the pool; returns a null object on failure.
  • shutdown(): Shutdown the object pool, which calls the destructor of all objects and prevents borrowing further.

You can find the full implementation here.
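To give an idea of how the pool is meant to be used, here’s a minimal usage sketch. It assumes the full ObjectPool/BorrowedObject implementation linked above is available, that BorrowedObject exposes the underlying object through operator-> (or a similar accessor), and that ThriftClient is just a stand-in type:

#include <cstddef>

// a stand-in for the real (non-thread-safe) Thrift client type
struct ThriftClient
{
    void callRemoteProcedure() { /* ... the actual RPC call ... */ }
};

void doOneCall(ObjectPool<ThriftClient>& pool)
{
    // blocks until a client is free; the client goes back to the pool
    // automatically when `borrowed` is destroyed at the end of the scope
    auto borrowed = pool.borrowObj();
    borrowed->callRemoteProcedure();
}

int main()
{
    // a pool of 8 clients; the lambda constructs client number i (unused here)
    ObjectPool<ThriftClient> pool(8, [](std::size_t) { return ThriftClient(); });

    doOneCall(pool);

    // non-blocking attempt
    bool success = false;
    auto maybe = pool.try_borrowObj(success);
    if (success)
        maybe->callRemoteProcedure();

    pool.shutdown();
    return 0;
}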

Conclusion

Here’s a simple design of a lock-free object pool, in no more than 200 lines. The feature I find cool about this object pool is that the objects themselves are not passed around, only their indices, which can be done quite efficiently on most modern CPUs.

Safe resource allocation and deallocation in C++ (C++11 and C++03)

Introduction

Back before 2010, one always used Valgrind to make sure that their C++ program wasn't leaking memory. There's a whole modern way of using C++ that removes even the need for that. If you're writing a C++11 program (and even if you're not), and you still have to keep checking whether your code is leaking memory, then you're most likely doing it all wrong. Keep reading if you wanna know the right way, which saves you the trouble of memory-checking all the time and uses true object-oriented programming.

Basically, this article is about the concept named RAII, Resource Acquisition Is Initialization, and why it's important from my perspective.

The golden rules of why you should do this

My Golden Rules in C++ development:

  1. Humans make mistakes, and only an arrogant person would claim that they don't (why else do we have something called "error handling"?)
  2. Things can go wrong, no matter how robust your program is and how careful you are, and you can't plan for every possible outcome
  3. A problem-prone program is no better than a program with a problem; so why bother writing a program with problems?

Once you embrace these 3 rules, you'll never write bad code, because your code will be ready for the worst-case scenario.

In the last few years, I haven't found a single lost byte in my Valgrind-analysed programs; and I'm talking about big projects, not the single-class level. The difference will become clear soon.

I'm going to start from simple cases, up to more complicated scenarios.

Scenario 1: If you're creating and deleting objects under pointers

Consider the following example:

void DoSomethingElse(int* x)
{
    std::cout << *x << std::endl;
    //... do stuff with x
}
void DoSomething()
{
    int* x = new int;
    DoSomethingElse(x);
    delete x;
}

Let me make this as clear as possible: if you ever, ever use a new followed by a delete… you're breaking all 3 rules we made up there. Why? Here are the rules and how you're breaking them:

1. You may make the mistake of forgetting to write that delete
2. DoSomethingElse() might throw an exception, in which case that delete will never be called.
3. This is a problem-prone design, so it's a program with a problem.

The right way to do this: Smart pointers!

What are smart pointers?

I'm sure you've heard of them before, but if you haven't, the idea is very simple. If you define, for example, an integer like this:

void SomeFunction()
{
    int i = 0;
    // do stuff with i
} //here you're going out of the scope of the function

You never worry about deleting i. The reason is that once i goes out of scope, it's cleaned up automatically. Smart pointers are just the same. They wrap your pointer, such that the object under it is deleted once the smart pointer goes out of scope. Let's look at our function DoSomething() again with smart pointers:

void DoSomething()
{
    std::unique_ptr<int> x(new int); //line 1
    DoSomethingElse(x.get());        //line 2
} //once gone out of scope, x will be deleted automatically, 
  //the destructor of unique_ptr will delete the integer

That's all the change you have to make, and you're done! In line 1, you're creating a unique_ptr, which encapsulates your pointer. The reason why it's "unique" will become clear soon. Once a unique_ptr goes out of scope, it deletes the object under it. So you don't have to worry! This way, the 3 Golden Rules are served. In line 2, we're using x.get() instead of x, because the get() method returns the raw pointer stored inside the unique_ptr. If you'd like to delete the object manually, use the method x.reset(). The method reset() can take another pointer as a parameter, or can be called with no arguments so the unique_ptr becomes empty (nullptr).

PS: unique_ptr is C++11. If you're using C++03, you could use unique_ptr from the boost library

Why is it called "unique"?

Generally, multiple pointers can point to the same object. So, going back to the initial example, the following is a possible scenario:

void DoSomething()
{
    int* x = new int(1);
    int* y = x; //now x and y, both, point to the same integer
    std::cout << *x << "\t" << *y << std::endl; //both will print 1
    *x = *x + 1; //add 1 to the object under x
    std::cout << *x << "\t" << *y << std::endl; //both will print 2
    delete x; //you delete only 1 object, not 2!
}

But can you do this with unique_ptr? The answer is *no*! That's why it's called unique, because it's a pointer that holds complete *ownership* of the object under it (the integer, in our case), and it's unique in that. If you try to do this:

void DoSomething()
{
    std::unique_ptr<int> x(new int);
    std::unique_ptr<int> y = x; //compile error!
}  

your program won't compile! Think about it… if this were to compile, who should delete the pointer when the function ends, x or y? It's ambiguous and dangerous. In fact, this is exactly why auto_ptr was deprecated in C++11. It allowed the operation mentioned above, which effectively *moved* the object under it through what looked like a copy. This was dangerous and semantically unclear, which is why it was deprecated.

On the other hand, you can move an object from one unique_ptr to another! Here's how:

void DoSomething()
{
    std::unique_ptr<int> x(new int);
    std::unique_ptr<int> y = std::move(x);
    //now x is empty, and y has the integer, 
    //and y is responsible for deleting the integer
}

with std::move(x), you convert x to an rvalue reference, indicating that it can safely be moved from.

Shared pointers

Since we established that unique pointers are "unique", let's introduce the solution to the case where multiple smart pointers can point to the same object. The answer is: shared_ptr. Here's the same example:

void DoSomething()
{
    std::shared_ptr<int> x(new int(2)); //the value of *x is 2
    std::shared_ptr<int> y = x; //this is valid!
    //now both x and y point to the integer
}

Who is responsible for deleting the object now, x or y? Generally, whichever of them dies last! The way shared pointers work is that they share a common reference counter. They count how many shared_ptrs point to the same object, and once the counter goes to zero (i.e., the last shared_ptr goes out of scope), that last one is responsible for deleting the object.

In fact, using the new operator manually is highly discouraged. The alternative is to use make_shared, which covers some corner cases of possible memory leaks; for example, when a shared_ptr is constructed inside a function call next to another argument expression that may throw before the shared_ptr takes ownership of the raw pointer. Here's how make_shared is used:

void DoSomething()
{
    std::shared_ptr<int> x = std::make_shared<int>(2); //the value of *x is 2
    std::shared_ptr<int> y = x; //this is valid!
    //now both x and y point to the integer
}


Note: Shared pointers change a fundamental aspect of C++, which is "ownership". When using shared pointers, it may be easy to lose track of the object. This is a common problem in asynchronous applications. This is a story for another day though.

Note 2: The reference counter of shared_ptr is thread-safe. You can pass it among threads with no problems. However, the thread-safety of the underlying object it points to is your responsibility.

Scenario 2: I don't have C++11 and I can't use boost

This is a common scenario in organizations that maintain very old software. The solution is very easy: write your own smart pointer class. How hard can it be? Here's a simple, quick-and-dirty example that works:

template <typename T>
class SmartPtr
{
    T* ptr;
    // disable copying by declaring (but not defining) the copy operations as private
    SmartPtr(const SmartPtr& other);
    SmartPtr& operator=(const SmartPtr& other);
public:
    SmartPtr(T* the_ptr = NULL)
    {
        ptr = NULL;
        reset(the_ptr);
    }
    ~SmartPtr()
    {
        reset();
    }
    void reset(T* the_ptr = NULL)
    {
        if(ptr != NULL)
        {
            delete ptr;
        }
        ptr = the_ptr;
    }
    T* get() const //get the pointer
    {
        return ptr;
    }
    T& operator*()
    {
        return *ptr;
    }
    T* release() //release ownership of the pointer
    {
        T* ptr_to_return = ptr;
        ptr = NULL;
        return ptr_to_return;
    }
};


and that's it! The method release() I haven't explained: it simply releases the pointer without deleting it. It's a way to tell the smart pointer: "Give me the pointer, and forget about deleting it; I'll take care of that myself".

You can now use this class exactly like you use unique_ptr. Creating your own shared_ptr is a little more complicated, though, and depends on your needs. Here are the questions you need to ask yourself when designing it:

  1. Do you need multithreading support? shared_ptr supports thread-safe reference counting.
  2. Do you need to just count references, or also track them? For some cases, one might need to track all references with something like a vector or map of references.
  3. Do you need to support release()? Releasing is not supported in shared_ptr: since it relies on reference counting, there's no way to tell the other instances to let go.

More requirements will require more work, especially since, prior to C++11, multithreading was not in the C++ standard, meaning you're gonna have to use system-specific code.
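To make the reference-counting idea concrete, here's a minimal single-threaded sketch (my own illustration, not the implementation linked below): no release(), no custom deleters, no thread safety.

#include <cstddef> //for std::size_t and NULL

template <typename T>
class SimpleSharedPtr
{
    T*           ptr;
    std::size_t* count; //shared by all copies that point to the same object

    void letGo()
    {
        if(count != NULL && --(*count) == 0)
        {
            delete ptr;
            delete count;
        }
    }
public:
    explicit SimpleSharedPtr(T* the_ptr = NULL)
        : ptr(the_ptr), count(the_ptr != NULL ? new std::size_t(1) : NULL)
    {
    }
    SimpleSharedPtr(const SimpleSharedPtr& other)
        : ptr(other.ptr), count(other.count)
    {
        if(count != NULL) ++(*count); //one more owner
    }
    SimpleSharedPtr& operator=(const SimpleSharedPtr& other)
    {
        if(this != &other)
        {
            letGo(); //drop the current object first
            ptr   = other.ptr;
            count = other.count;
            if(count != NULL) ++(*count);
        }
        return *this;
    }
    ~SimpleSharedPtr()
    {
        letGo(); //the last owner deletes the object
    }
    T* get() const { return ptr; }
    T& operator*() const { return *ptr; }
};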

For a strictly single-threaded application with C++03, I created a shared pointer implementation that supports releasing. Here's the source code.

Scenario 3: Enable a flag, do something, then disable it again

Consider the following code, which is common in GUI applications:

void GUIClass::addTheFiles(const std::vector<FileType>& files)
{
    this->disableButton();
    for(unsigned i = 0; i < files.size(); i++)
    {
        fileManager.addFile(files[i]);
    }
    this->enableButton();
}

While this looks like a legitimate way to do things, it's not. This is absolutely no different from the pointer situation. What if adding fails? Either because of a memory problem, or because of some exception? The function will exit without re-enabling that button, your program will become unusable, and the user will probably have to restart it.

Solution? Just like before: don't do it yourself, and get the destructor of some class to do it for you. Let's do this. What do we need? We need a class that will call a given function when it goes out of scope. Consider the following class:

class AutoHandle
{
    std::function<void()> func;
    bool done = false; //used to make sure the call is done only once
    // disable copying and moving
    AutoHandle(const AutoHandle& other) = delete;
    AutoHandle& operator=(const AutoHandle& other) = delete;
    AutoHandle(AutoHandle&& other) = delete;
    AutoHandle& operator=(AutoHandle&& other) = delete;
public:
    AutoHandle(const std::function<void()>& the_func)
    {
        func = the_func;
    }
    void doCall()
    {
        if(!done) 
        {
            func();
            done = true;
        }
    }
    ~AutoHandle()
    {
        doCall();
    }
};


Let's use it!

void GUIClass::addTheFiles(const std::vector<FileType>& files)
{
    this->disableButton();
    AutoHandle ah([this](){this->enableButton();}); //lambda function that contains the function to be called on exit
    for(unsigned i = 0; i < files.size(); i++)
    {
        fileManager.addFile(files[i]);
    }
} //Now, the function enableButton() will definitely be called on exit.


This way, you guarantee that enableButton() will be called when the function exits. This whole thing is C++11, but doing it in C++03 is not impossible, though I completely sympathize with you if you feel it's too much work for such a simple task, because:

  1. Since there's no std::function in C++03, we're gonna have to make that class a template that accepts functors (function objects)
  2. Since there are no lambda functions in C++03, we're gonna have to write a new functor for every case (depending on how much you'd like to toy with templates, which is another big topic)

Just for completeness, here's how you could use AutoHandle in C++03 with a Functor:

template <typename CallFunctor, typename T>
class AutoHandle
{
    bool done; //used to make sure the call is done only once
    CallFunctor func;
    // disable copying by declaring (but not defining) the copy operations as private
    AutoHandle(const AutoHandle& other);
    AutoHandle& operator=(const AutoHandle& other);
public:
    AutoHandle(T* caller) : func(CallFunctor(caller))
    {
        done = false;
    }
    void doCall()
    {
        if(!done)
        {
            func();
            done = true;
        }
    }
    ~AutoHandle()
    {
        doCall();
    }
};

struct DoEnableButtonFunctor
{
    GUIClass* this_ptr;
    DoEnableButtonFunctor(GUIClass* thisPtr)
    {
        this_ptr = thisPtr;
    }
    void operator()()
    {
        this_ptr->enableButton();
    }
};


Here's how you can use this:

void GUIClass::addTheFiles(const std::vector<FileType>& files)
{
    this->disableButton();
    AutoHandle<DoEnableButtonFunctor,GUIClass> ah(this); //functor will be called on exit
    for(unsigned i = 0; i < files.size(); i++)
    {
        fileManager.addFile(files[i]);
    }
} //Now, the function enableButton() will definitely be called on exit.

Again, writing a functor for every case is a little painful, but depending on the specific case, you may decide it's worth it. In C++11 projects, however, there's no excuse. You can easily make your code way more reliable with lambdas.

Remember, you're not bound to the destructor to do the call. You can also call doCall() yourself anywhere (similar to reset() in unique_ptr). But the destructor will *guarantee* that the worst-case scenario is covered if something goes wrong.

Scenario 4: Opening and closing resources

This can be even more dangerous than the previous cases. Consider the following:

void ReadData()
{
    int handle = OpenSerialPort("COM3");
    ReadData(handle);
    Close(handle);
}


This form is quite common in old libraries. I faced such a format with the HDF5 library. If you've read the previous sections, you'll see the problem with such usage and have an idea of how to fix it. It's all the same: you *should never* close resources manually. For my HDF5 problem, I wrote a SmartHandle class that guarantees that HDF5 resources are correctly closed. Find it here. Of course, the formal right way to do this is to write a whole wrapper for the library. That may be overkill depending on your project constraints.
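For illustration, here's a minimal sketch of such a wrapper around the hypothetical serial-port API from the example above (OpenSerialPort, ReadData and Close are assumed to exist with those signatures):

class SerialPortHandle
{
    int handle;
    // non-copyable: declared private and not defined
    SerialPortHandle(const SerialPortHandle& other);
    SerialPortHandle& operator=(const SerialPortHandle& other);
public:
    explicit SerialPortHandle(const char* portName)
    {
        handle = OpenSerialPort(portName);
    }
    ~SerialPortHandle()
    {
        Close(handle); //always runs, even if something throws after opening
    }
    int get() const
    {
        return handle;
    }
};

void ReadDataSafely()
{
    SerialPortHandle port("COM3");
    ReadData(port.get()); //if this throws, the destructor still closes the port
} //the port is closed here, no matter what happened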

Notes on Valgrind

If you follow these rules, you'll be safe with the resources you use. You'll rarely ever need to use Valgrind. However, when you write the classes we talked about (such as AutoHandle, SmartPtr, etc.), it's very, very important not only to test them with Valgrind, but also to write good tests that cover every corner case. Because once you get these classes right, you never have to worry about them again. If you get them wrong, the consequences can be catastrophic. Surprising? Welcome to object-oriented programming! This is exactly what "separation of concerns" means.

Conclusion

Whenever you have to do something and then undo it, keep in mind that you shouldn't do this manually. Sometimes it's safe and trivial, but many times it leads to bad and error-prone design. I covered a few cases and different ways to tackle the issue. By following these examples, I guarantee that your code will become more compact (given that you're using C++11) and way more reliable.

Just a side-note on standards and rules

Sometimes when I discuss such issues related to modern designs, people claim that they have old code and they don't want to change the style of the old code, for consistency. Other times they claim that this would violate some standard issued by some authority. None of these reasons show that doing or not doing this is right or wrong. It just shows that someone doesn't want to do it because they're "following the rules". To me (and I can't emphasize enough that this is my opinion), it's like asking someone "Why do you do this twice a day, how does it help you?", and getting the answer "Because my religion tells me I have to do it". While I respect all ideologies, such an answer is not a rational answer that weighs the pros and cons of doing something. That answer doesn't tell you why doing that twice a day helps that person's health, physically, mentally or otherwise. They're just following a rule *that shouldn't be discussed*. I'm a rational person and I like discussing things by putting them on the table, which helps in achieving the best outcome. If someone's answer is "I have a standard I need to follow", then that effectively and immediately closes the discussion. There's nothing else to add. Please note that I'm not encouraging breaking the rules here. If your superior tells you how to do it, and you couldn't convince them otherwise, then just do what they tell you, because most likely your superior has a broader picture of other aspects of the project that don't depend on code only.


Can I use Qt Creator without Qt?

Qt Creator… the best IDE for developing C and C++ I’ve ever seen in my life. Since I like it that much, I’m sharing some of what I know about it.

DISCLAIMER: I’m not a lawyer, so don’t hold me liable for anything I say here about licensing. Anything you do is your own responsibility. 

What does using Qt Creator without Qt mean?

It simply means that you don’t have to install or use the Qt libraries, including qmake. There are many reasons why that may be the case:

  • You may have an issue with licensing, since qmake is LGPL-licensed
  • You may not have the option to install qmake alone without its whole gear, as is the case on Windows
  • You may not want to compile the whole Qt libraries if a pre-compiled version that works with your compiler is not available

While I love Qt and use it all the time, I follow the principle of decoupling my projects from libraries I don’t use. But since I love Qt Creator, I still want to use it! The reasons will become clear below.

What are the ingredients of this recipe?

  1. Qt Creator
  2. CMake
  3. A compiler (gcc, MinGW or Visual Studio, or anything else)

You don’t need to download the whole Qt SDK. Just Qt Creator. It’s about 90 MB.

Basic steps

After installing the 3 ingredients, make sure that Qt Creator has recognized that CMake exists on the computer. The next picture shows how it looks when CMake is found. If it doesn’t find CMake on Windows, the most likely reason is that you chose during installation not to add CMake to the system’s PATH. You can simply add it manually if it can’t be found automatically.

Next, make sure that Qt Creator recognizes the compiler and the debugger, as you see in the next pictures. Again, you can add them manually.

For Visual Studio to be found automatically, I guess the environment variable VS140COMNTOOLS has to be defined. “140” is version 14 of Visual Studio, which is the version number of Visual Studio 2015. It’s defined by default when Visual Studio is installed. For MinGW to be detected automatically, the bin of MinGW has to be in PATH.

Adding the complete tool-chain/kit

Go to the “Kits” tab. If you have Qt libraries installed and configured, you’ll see them there. I don’t like Qt SDK, and I usually compile my own versions of Qt. You’ll see that in the next screenshot.

What you see in the next screenshot are 3 kits that use Qt libraries, and another that does not. The free version of Visual Studio (2012 and later) comes with both 32-bit and 64-bit compilers. You can choose either of them, or both (like I do). I don’t use MinGW often on Windows, so I install only one version of it (I use MinGW on Windows primarily because it offers the “-pedantic” flag, which gives me the chance to experiment with C++ standard-approved features).

Now click “Add”, and fill in the fields as shown below (most importantly, configure CMake correctly; it’s a little tricky, and the error Qt Creator shows doesn’t tell you what you did wrong).

If you’re getting unexplained errors when running CMake, follow these instructions carefully.

Visual Studio

The following is a screenshot of how the Visual Studio configuration should look (the screenshot is for 64-bit).

Set the following

  • For the 32-bit compiler: Choose the compiler with (x86)
  • For the 64-bit compiler: Choose the compiler with (x86_amd64) or (amd64)
  • Choose “None” for Qt version
  • Choose the correct debugger
  • Most importantly: After choosing CMake from the drop-down list, make sure that CMake generator is chosen to be “NMake Makefiles”, and the Extra generator to be CodeBlocks

This last piece of information is an invitation to all kinds of problems. If running CMake doesn’t work, you most likely configured that part incorrectly.

MinGW

The following is a screenshot of how the MinGW configuration should look:

  • Choose “None” for Qt version
  • Choose the correct debugger
  • Most importantly: After choosing CMake from the drop-down list, make sure that CMake generator is chosen to be “MinGW Makefiles”, and the Extra generator to be CodeBlocks

This last piece of information is an invitation to all kinds of problems. If running CMake doesn’t work, you most likely configured that part incorrectly.

I have to say that, from my experience, Qt Creator sometimes fails to run CMake with MinGW for no good reason. I fix this by switching the “Extra generator” to “CodeLite”, then back to “CodeBlocks”. I’m currently using Qt Creator 4.2.0. It might be a bug?

gcc/g++

It’s very easy to get gcc/g++ to work. The following is a screenshot:

  • Choose “None” for Qt version
  • Most importantly: After choosing CMake from the drop-down list, make sure that CMake generator is chosen to be “CodeBlocks – Unix Makefiles”.  It’s chosen by default, so nothing to worry about.

And you’re done!

What can I do with CMake + Qt Creator?

No ultimate dependence on Qt Creator!

Visual Studio solutions don’t work without Visual Studio. Netbeans configuration is stored in a “netbeans project directory”. It’s very annoying that every IDE has its own weird format! I never stop hearing people complain about porting their programs to other systems and having problems because of this.

One of the things I like most about Qt Creator is the fact that it doesn’t have any special project files of its own. The build file itself (a CMake file, in this case, or a qmake file otherwise) is the project file. This ensures not only 100% portability (since both build systems are cross-platform) but also independence from Qt Creator itself. In the future, if the whole Qt Creator project goes down, your project won’t be affected at all.
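As an illustration of that independence, the same CMakeLists.txt builds from a plain terminal without Qt Creator at all (paths and generator names below are just examples):

# on Linux, from the project directory
mkdir build && cd build
cmake ..        # generates Makefiles with the default generator
make

# on Windows, from a Visual Studio command prompt
cmake -G "NMake Makefiles" ..
nmake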

What if I want to use Qt Creator just as an IDE without having to build a project through it?

I had the “luxury” of getting a project from a space agency to add some features to, and they had their own build system. I wasn’t able to use Qt Creator to build the project, but…

But why use Qt Creator? Simply because you’ll get

  • Syntax highlighting
  • Following target functions and classes (jump to definition)
  • Advanced refactoring options (like renaming classes, functions, etc.)
  • Search capabilities across source files
  • Repository update checks
  • Type hierarchy navigation with a click
  • And lots more!

With all these, and even though the project contained a few thousand source files, I was able to add *all* the files and have Qt Creator parse them using a few CMake lines. And it was fast enough to handle all of that!

How to do it? How to add all these files in one step?

CMake supports recursive adding of source files. Consider the following CMake file (always called CMakeLists.txt):

cmake_minimum_required(VERSION 3.0)
PROJECT(MyProject)

file(GLOB_RECURSE MySrcFiles
 "${CMAKE_SOURCE_DIR}/src/*.cpp"
 "${CMAKE_SOURCE_DIR}/src/*.h"
 )

add_library(MySrcFilesLib
 ${MySrcFiles}
 )

add_executable(MyExecutable main.cpp)

target_link_libraries(MyExecutable MySrcFilesLib)

The first two lines are obvious. The “file” part recursively finds all the files under the mentioned directory and saves the list in the variable ${MySrcFiles}. The variable ${CMAKE_SOURCE_DIR} is basically the directory where your CMake file “CMakeLists.txt” is located. Feel free to set any directory you find fit.

The “add_library” part creates a library from the source (and header) files saved in ${MySrcFiles}.

The “add_executable” part “creates” the executable, which is then linked against the library you added.

That last linking part is not necessary if you don’t want to build in Qt Creator.

With such a simple CMake file, Qt Creator was smart enough to add all the source files, parse them, and give me all the functionality I needed to edit that project successfully.

SSH proxy with Putty that reconnects automatically

Introduction

Putty can be used to tunnel your connection through your SSH server. It creates a SOCKS proxy that you can use anywhere. The annoying part is that if Putty disconnects for any reason, you have to re-establish the connection manually. If you’re happy reconnecting Putty manually all the time, then this article is not for you. Otherwise, keep reading to see how I managed to find a fair solution to this problem.

Ingredients

For this recipe, you need the following

  1. Putty, of course. You could use “Putty Tray”, which supports being minimized to the system tray.
  2. Python 3

I understand that you may not have or use Python, but the solution I used depends on it, unfortunately. You could download Miniconda 3, which is a smaller version of Anaconda 3. Anaconda is my recommended Python distribution.

How does this work in a nutshell?

The idea is very simple.

  1. Create a Putty session configuration that suits you
  2. Configure the session to close on disconnect
  3. Create a script that will relaunch your putty session on exit
  4. Make sure that the launcher you’re gonna use doesn’t keep a command prompt window open (which is why I’m using Python; pythonw.exe solves this issue).

Configuring Putty

To create a putty tunnel proxy, load your favorite session configuration, and then go to the tunnels configuration, and use the following settings, assuming you want the SOCKS proxy port to be 5150. Choose any port you like.

[Screenshot: Putty tunnels configuration creating the SOCKS proxy]

After connecting with this configuration, Putty creates a SOCKS proxy that you can connect to at the loopback IP address 127.0.0.1, port 5150.
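To check that the proxy is actually working, you can point any SOCKS-aware client at it; for example, with curl (if you happen to have it installed):

curl --socks5-hostname 127.0.0.1:5150 https://example.com

If that returns a page, the tunnel is up.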

One more important thing to configure in Putty is to make it close on failure/disconnect. This is important because we’re gonna make Putty reconnect through a program that detects that it exited and starts it again.

[Screenshot: Putty session configuration for closing the window on disconnect]

Configuring Python and the launch script

Assuming you installed Python 3 and included it in your PATH, you now have to install a package called tendo. This package is used to prevent running multiple instances of the script.

To install it, first, run the command prompt of Windows (in case Python is installed directly on the system drive, C:\, you have to run it as administrator). In the command prompt, to ensure that Python is working fine, run:

python -V

If this responds with no error and gives a message like:

Python 3.5.1 :: Anaconda 4.1.0 (32-bit)

Then you’re good to go! Otherwise, make sure that Python is added to PATH, and try again.

To install tendo, simply run in command prompt

pip install tendo

After that, in the directory of putty, write this script to a file:


import subprocess
from tendo import singleton
import time

me = singleton.SingleInstance() #exits if another instance exists

while True:
    print("Starting putty session...")
    subprocess.call(["putty.exe", "-load", "mysession"])  # blocks until putty exits
    print("Putty session closed... restarting...")
    time.sleep(5)  # sleep to avoid infinitely fast restarting if no connection is present

The name “mysession” is the name of your session in putty. Replace it with your session name.

This script first checks that the current instance is the only running instance, and then runs an infinite loop that relaunches putty every time it exits. Since we configured putty to exit on disconnect, the loop will keep re-establishing the connection indefinitely.

Save this script to some file like “MyLoop.pyw”.

Testing the loop

Python has two executables. First is “python.exe”, and the other one is “pythonw.exe”. The difference is quite simple. The first one, “python.exe”, runs your script as a terminal program. The second one, “pythonw.exe”, runs your script without a terminal. It’s designed for GUI applications. Now “python.exe” is not what we need, but it still is useful for debugging the script. So whenever you have a problem or when you want to run this for the first time, switch to/use “python.exe”. Once you’re done and everything looks fine, switch to “pythonw.exe”.

Final step: The execution shortcut

This is not necessary, but it makes things easy. It makes it simple to control whether you want to use “python.exe” or “pythonw.exe”, and it keeps your script handy. Simply create a shortcut to “python.exe” or “pythonw.exe”, and pass your script as the first command line parameter. Remember that “Start in” has to be the directory where Putty is located. The following picture is an example of how that shortcut should look.

PuttyLoopShortcut
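To give a concrete example of such a shortcut (the paths below are just examples; use wherever your Python or Miniconda is actually installed, and set “Start in” to the folder that contains putty.exe and MyLoop.pyw):

Target: C:\Miniconda3\pythonw.exe MyLoop.pyw
Start in: C:\Program Files (x86)\PuTTY

While testing, point the Target at python.exe instead, so you can see the script’s output in a console window.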

And you’re good to go! Start with “python.exe”, and once it works, i.e., you find that every time you exit putty or a disconnection happens it gets relaunched, switch to “pythonw.exe”, and you’re done.

Final notes

This is not a super-fancy solution. This is a solution that’ll get you through and get the job done. If you want to exit the looping script, you’ll have to kill Python from your task-manager.

You may create a fancy taskbar app that does the looping and exiting, which I would’ve done if I had the time. So please share it if you do! You can use PyQt or PySide for that.

Conclusion

With this, you’ll keep reconnecting on disconnect, and you can get all your software to use your ssh server as a SOCKS proxy. Cheers!

Tunnel through https to your ssh server, and bypass all firewalls – The perfect tunnel! (HAProxy + socat)

Disclaimer

Perhaps there’s no way to emphasize this more, but I don’t encourage violation of corporate policy. I do this stuff for fun, as I love programming and I love automating my life and gaining more convenience and control with technology. I’m not responsible for any problem you might get with your boss in your job for using this against your company’s firewall, or any similar problem for that matter.

Introduction

I was in a hotel in Hannover when I tried to access my server over ssh. My ssh client, Putty, gave this disappointing message:

PuttyConnectionRefused

At first I got scared as I thought my server was down, but then I visited the websites hosted on that server, and they were fine. After some investigation, I found that my hotel blocks access to many ports, including port 22, i.e., ssh. Did this mean that I wouldn’t have access to my server during my trip? Not really!

I assume you’re using a Windows client, but in case you’re using linux, the changes you have to make are minimal, and I show side-by-side how to do the same on a linux client. Let me know if you have a problem with any of this.

Tunneling mechanism, and problems with other methods that are already available

There is software that does something similar for you automatically, like sslh, but there’s a problem there.

What does sslh do?

When you install sslh on your server, you choose, for example, port 443 for it. Port 443 is normally for http over ssl (https) and is normally taken by your webserver, so you also change your webserver’s port to some arbitrary port, say 22443. Then, when a client connects to that server, sslh analyzes the incoming network packets and detects whether they are ssh or https. If the packets are ssh, it forwards them to port 22. If the packets look like https, it forwards them to the dummy port you chose, which is 22443 as we assumed.
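To make that concrete, a typical sslh invocation for the setup described above would look roughly like the following; treat it as a sketch, since flag names differ slightly between sslh versions (check the man page of your version):

sslh -p 0.0.0.0:443 --ssh 127.0.0.1:22 --ssl 127.0.0.1:22443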

What’s the problem with sslh, and similar programs?

It all depends on how sophisticated the firewall you’re fighting is. Some firewalls are mediocre, and they just blindly open port 443, and you can do your sslh trick there and everything will work fine. But smart firewalls are not that dull; they analyze your packets and then judge whether you’re allowed to be connected. Hence, a smart firewall will detect that you’re trying to tunnel ssh, and will stop you!

How do we solve this problem?

The solution is: masquerade the ssh packets inside an https connection; the firewall would then have to perform a man-in-the-middle attack in order to know what you’re trying to do. This will never happen! Hence, I call this solution “the perfect solution”.

How to create the tunnel?

I use HAProxy for this purpose. You need it on your server. It’s available in standard linux distributions. In Debian and Ubuntu, you can install it using

sudo apt-get install haproxy

You will need “socat” on your client to connect to this tunnel. This comes later after setting up HAProxy.

How does HAProxy work?

I don’t have a PhD in HAProxy; it’s a fairly complicated program that can be used for many purposes, including load balancing and simple internal proxying between different ports, and I use it here only for this one purpose. Let me give a brief explanation of how it works. HAProxy uses the model of frontends and backends. A frontend is what a client sees: you set a port there and a communication mode (tcp, for example). You also tell a frontend where the packets should go, based on some conditions (called ACLs, Access Control Lists), by choosing which backend the packets have to go to. The backend contains information about the target local port. So in short, you tell HAProxy to forward packets from a frontend to a backend based on some conditions.

A little complication if you use https websites on the same server

If you run an https webserver on the same machine, you’ll have a problem: you need to check whether the packets are ssh before decrypting them, because once a frontend decrypts them, you can’t pass them on in their original encrypted form anymore (HAProxy doesn’t support forwarding the encrypted and decrypted versions of the same traffic side by side), and decryption is something you choose per frontend. That’s why we use SNI (Server Name Indication) and do a trick:

  • If there’s no SNI (no server name, just IP address), then forward to ssh
  • If the server name used is ssh.example.com (some subdomain you choose), then also forward to ssh (optional)
  • If anything else is the case, forward to the https web server port

We also use two frontends. The first one is the main one; the second is a dummy frontend that is only used to decrypt the ssh connection’s https masquerade, since HAProxy decrypts only in frontends.

Let’s do it!

The configuration file of HAProxy in Debian/Ubuntu is

/etc/haproxy/haproxy.cfg

You could use nano, vi or vim to edit it (you definitely have to be root or use sudo). For example:

sudo nano /etc/haproxy/haproxy.cfg

Assumptions

  1. Your main https port is 443
  2. Your main ssh port is 22
  3. Your https webserver is now on 22443
  4. The dummy ssh port is 22222 (it’s used just for decryption; it doesn’t matter which port you pick)

Main frontend

This is the frontend that will take care of the main port (supposedly 443). Everything after a sharp sign (#) on a line is a comment.

#here's a definition of a frontend. You always give frontends and backends a name
frontend TheMainSSLPort
 mode tcp
 option tcplog
 bind 0.0.0.0:443 #listen to port 443 under all ip-addresses

 timeout client 5h #timeout is quite important, so that you don't get disconnected on idle
 option clitcpka

 tcp-request inspect-delay 5s
 tcp-request content accept if { req_ssl_hello_type 1 }

 #here you define the backend you wanna use. The second parameter is the backend name
 use_backend sshDecrypt if !{ req_ssl_sni -m found } #if no SNI is given, then go to SSH
 use_backend sshDecrypt if { req_ssl_sni -i ssh.example.com } #if SNI is ssh.example.com, also go to ssh

 default_backend sslWebServerPort #if none of the above apply, then this is https

In the previous configuration, we have two paths for the packets, i.e., two backends:

  1. If the connection is ssh, the backend named “sshDecrypt” will be used.
  2. If the connection is https, the backend named “sslWebServerPort” will be used.

The https backend

I put this here first because it’s easier. All you have to do here is forward the packets to your webserver’s port, which we assumed to be port 22443. The following is the relevant configuration:

backend sslWebServerPort
 mode tcp
 option tcplog
 server local_https_server 127.0.0.1:22443 #forward to this server, port 22443

Now the https part is done. Let’s work on the ssh part.

The ssh front- and backends

We’ll have to use a trick, as mentioned before, to get this to work. Once a judgment is made for packets to go to ssh (using SNI), the packets have to be decrypted. This is not possible in a backend, so we use a backend to forward the packets to a dummy frontend that decrypts them, and that dummy frontend then sends them to another backend that finally forwards them to the ssh server.

backend sshDecrypt
 mode tcp
 option tcplog
 server sshDecFrontend 127.0.0.1:22222
 timeout server 5h

This forwards the packets to port 22222. Now we build a frontend at that port that decrypts the packets.

frontend sshDecryptionPort
 timeout client 5h
 option clitcpka

 bind 0.0.0.0:22222 ssl crt /path/to/combined/certs.pem no-sslv3
 mode tcp
 option tcplog

 tcp-request inspect-delay 5s
 tcp-request content accept if HTTP

 default_backend sshServ #forward to the ssh server backend

The file /path/to/combined/certs.pem has to contain your SSL private key, certificate and certificate chain, concatenated into one file.
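For example, assuming your key, certificate and chain are in separate PEM files (the file names here are placeholders), you could build the combined file like this:

cat example.com.key example.com.crt example.com.chain.pem > /path/to/combined/certs.pem

Keep the permissions on that file tight, since it contains your private key.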

Finally, the back end to the ssh server:

backend sshServ
 mode tcp
 option tcplog
 server sshServer1 127.0.0.1:22
 timeout server 5h

That’s all you need to create the tunnel.

Test your haproxy configuration on the server

To test your configuration, stop HAProxy using

sudo service haproxy stop

and run the following command to start HAProxy in debug mode:

sudo haproxy -d -f /etc/haproxy/haproxy.cfg

The “-d” flag is debug mode, and the “-f” flag is used to choose the config file. The typical output looks like:

Available polling systems :
 epoll : pref=300, test result OK
 poll : pref=200, test result OK
 select : pref=150, test result FAILED
Total: 3 (2 usable), will use epoll.
Using epoll() as the polling mechanism.

If you don’t get any errors, then your configuration is OK. Press ctrl+c to close this foreground version of HAProxy, and start the HAProxy service:

sudo service haproxy start

Test your tunnel from your client

To test your client, you can use OpenSSL. The following command will connect to the server.

openssl s_client -connect ssh.example.com:443

You can also use your IP address. This will connect to HAProxy, and will be interpreted as ssh, if your configuration is correct. Once it reaches the ssh server, you’re good! You’ll see lots of stuff from OpenSSL, and finally a few seconds later the following message will appear if you’re using a Debian server:

SSH-2.0-OpenSSH_6.7p1 Debian-5+deb8u2

The message will change depending on your server’s linux distribution and OpenSSH server version. Once you see this message, you know you’ve reached your ssh server successfully. Now you have to set up a connection to your server through the tunnel.

Connecting to the ssh server using the tunnel

You need socat on your client to connect to the https tunnel, and then you ssh through that tunnel. On Windows, socat can be downloaded either as a zip package (please google it and try it; if it works, great. I had problems with the OpenSSL dlls back when I first tried this), or you can get it through Cygwin. Cygwin is a set of linux programs compiled for Windows. Don’t get too enthusiastic in the installer and download all its components, or you’ll easily consume 30 GB of disk space and 10 hours installing them. Just download what you need.

In case you’re using a linux client, socat is a standard program in linux. Just install it with your default package manager, e.g. in Debian/Ubuntu

sudo apt-get install socat

Running the socat tunnel

Open your Windows command prompt as administrator (or linux terminal), and use the following command to connect to your server using socat

socat -d TCP-LISTEN:8888,fork,range=127.0.0.1/32 OPENSSL-CONNECT:ssh.example.com:443,verify=0

Here we use port 8888 as an intermediate local port on your client. Once this works with no errors, you’re good to use an ssh client.

Warning: the “-v” flag makes socat verbose. Use it only for tests, not for serious connections, as it writes everything to the terminal where socat is running, and since the Windows Command Prompt prints messages synchronously, it’ll slow everything down for you.

Connect with your ssh client

Assuming you’re using putty, this is how your client should look:

PuttySSHTunnel

Or if you’re using a linux client, simply use this in your terminal

ssh 127.0.0.1 -p 8888

And you should connect, and you’re done! Congratulations! You’re connected to your ssh server through https.

What about the other ports, other than ssh?

Once you got ssh working, everything else is easy. You can use an ssh SOCKS proxy tunnel. Putty does this easily for you. All you have to do is configure your connection as in the picture:

PuttySOCKS

This creates a SOCKS proxy. To use it, here is an example of what I do in Thunderbird to secure my e-mail connections. You can do the same in any program you like, even in your web browser:

SOCKSProxySettingsThunderbird

You can do the same on linux. Please google how to create an ssh tunnel on linux for SOCKS proxy.
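If you don’t feel like googling it: with OpenSSH on a linux client, the dynamic (SOCKS) tunnel is the -D flag. Assuming the socat listener from above is still running on local port 8888, and you want the SOCKS proxy on port 5150 (replace youruser with your actual username):

ssh -D 5150 -p 8888 youruser@127.0.0.1

Then point your applications at the SOCKS proxy 127.0.0.1:5150, exactly as with Putty.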

Conclusion

You’re connected! You can bypass any firewall you want just by being given access to port 443. The only way to stop you is to cut off your internet completely 🙂

Share this if you like it! I welcome your questions and any other thoughts in the comments.

Start your computer remotely using Raspberry Pi

Why do that?

I have my server at home, which contains multiple hard-drives with all my data on them in a RAID configuration. I have my way of accessing this server remotely from anywhere in the world, but in order to access the server, it has to be turned on! So the problem is: how do I turn my server on when I need it?

This whole thing took me about 4 hours of work. It turns out it’s much easier than it looks.

Why is keeping the server turned on all the time a bad idea?

Of course, a web-server can be left turned on all the time to be accessed from everywhere at any time, but for a server that is used to store data… I don’t see why one would turn it on unless one needs something from it. In fact, I see the following disadvantages in keeping the server on all the time, and benefits in being able to turn it on remotely:

  1. High power consumption… the server I use is relatively low-power, but why draw 150 W all the time with no real benefit?
  2. Reduction of the server’s life-span; components like the processor have a mean life-time that is consumed by continuous operation.
  3. Fans wear out and become noisier when used for longer times.
  4. What if the server froze? I should be able to restart it remotely.

What do you need to pull this off?

  • Raspberry Pi (1 or 2, doesn’t matter, but I’ll be discussing 2)

Pi2ModB1GB_-comp

  • 5 Volts Relay. I use a 4 Channel relay module. It costs like $7 on eBay or Amazon depending on how many channels you need.

Relay5V-4Ch

  • Jumper cables (female-female specifically if you’re using Raspberry Pi + a similar Relay Module) to connect the Raspberry Pi to the Relay Module

JumperCables

 

  • More wires and connectors to connect the server to the Raspberry Pi cleanly, without having a long cord permanently connected to the server. I used scrap Molex 4-pin connectors:

MolexConnector

I cut a similar connector in half and used one part as a permanent connector to the server, and the other part went to the wire that goes to the Relay Module.

  • Finally, you need some expertise in Linux and SSH access as the operating system I use on my Raspberry Pi is Raspbian. This I can’t teach here, unfortunately, as it’s an extensive topic. Please learn how to access Raspberry Pi using SSH and how to install Raspbian. There are tons of tutorials for that online on the Raspbian and Raspberry websites that teach it extensively. If you’re using Windows on your laptop/desktop to SSH to the Raspberry Pi, you can use Putty as an SSH client.

Once you’re in the terminal of your Raspberry Pi, you’re ready to go!

How control is done using Raspberry Pi:

If you already know how to control Raspberry Pi 2 GPIO pins, you can skip this section.

On Raspberry Pi 2, there is a set of 40 pins that contain 26 pins that are called GPIO (General Purpose Input/Output) pins. GPIO pins can be controlled from the operating system of Raspberry Pi. I use Raspbian as an operating system of my Raspberry Pi 2 and the Python scripting language. In Raspbian, python is pre-equipped with what’s necessary to start controlling GPIO pins very easily.

Why Python? Because it’s super-easy and is very popular (it took me a few days to become very familiar with everything in that language… yes, it’s that simple). Feel free to use anything else you find convenient for you. However, I provide here only Python scripts.

The following is a map of these pins:

Raspberry-Pi-GPIO-Layout-Worksheet-page-001

 

And the following is a video where I used them to control my 4-channel Relay Module:

And the following is the Python script I used to do that. Lines that start with a sharp (#) are comments:

Note 1: Be aware that indentation matters in Python for each line (that’s how you identify scopes in Python). If you get an indent error when you run the script, that only means that the indentation of your script is not consistent. Read a little bit about indentation in Python if my wording for the issue isn’t clear.

Note 2: You MUST run this as super-user.

#!/usr/bin/python3

import RPi.GPIO as GPIO
import time

GPIO.setmode(GPIO.BCM)

#The following is a function that inverts the current pin value and saves the new state
def switchPortState(portMapElement):
    GPIO.output(portMapElement[0],not portMapElement[1])
    pe = [portMapElement[0],not portMapElement[1]]
    return pe


#There's no easy way to know the current binary state of a pin (on/off, or 1/0, or True/False),
#so I use this structure, which is a dictionary that goes from 0 up to the number of channels
#one wants to control (I used GPIO channels 2, 3, 5 and 6). The first element of each entry is
#the GPIO port number, and the second element is the assumed initial state. The latter will
#invert in each step as in the video
portMap = {}
portMap[0] = [2,False]
portMap[1] = [3,False]
portMap[2] = [5,False]
portMap[3] = [6,False]

for i in range(len(portMap)):
    GPIO.setup(portMap[i][0], GPIO.OUT)

while True:
    for i in range(len(portMap)):
        portMap[i] = switchPortState(portMap[i])
    time.sleep(0.5)

If you access your Raspberry Pi using SSH, then you can use “nano” as an easy text editor to paste this script. Say if you wanna call the script file “script.py”, then:

nano script.py

will open a text editor where you can paste this script. After you’re done, press Ctrl+X to exit and choose to save your script. Then make this script executable (linux thing), using:

chmod +x script.py

then run the script using

sudo ./script.py

This will start the script, and the LEDs will flash every half second.

Again, we’re using “sudo” because we can only control Raspberry Pi’s GPIO pins as super-user. There are ways to avoid typing your password each time you wanna run this, which will be explained later.

Get a grasp on the concept of turning the computer on/off:

There are two ways to turn your computer on/off electronically without using the switch and without depending on the BIOS (Wake-on-LAN, etc.):

  1. If you’re lucky, the power button’s wires will be exposed and you can immediately make a new connection branch in the middle and lead it outside the computer. Shorting the wires is equivalent to pressing the power button.
  2. Use the green wire of the power supply’s motherboard connector. Shorting this wire to ground (any black wire) will jump-start the power supply and turn the computer on.

The following is a random picture of a computer power supply, where a clip is used to short the green wire with ground.

ATX-Power-Supply-Connector

I used the first of the two ways I mentioned. Here’s a video showing how it looks:

So what I did is short the two wires that come from the power button for a short time (half a second), which is equivalent to pressing the power button. Once you’ve figured out how to connect these, you can go to the next step.

Connecting the power-wires to the Relay Module:

Now that you know how to control the Relay Module and how to take a branch out of the computer case that starts the computer when shorted, the remaining part is to connect those power-wires, which you got from your computer’s power button or from the green+black power supply cords, to the Relay Module. The following video shows the concept and the result.

Now that you have the two terminals that start the computer when shorted together, let’s get into a little more detail.

Important: one thing to keep in mind when connecting the wires to the Relay Module is that we need to connect them in a way that does not trigger the power switch if the Raspberry Pi is restarted. Therefore, choose the terminal contacts that are disconnected by default, as the following picture shows:

Relay5V-4Ch-defaults

 

Connect the two terminals of your power-wires to any of the pairs marked in the picture. The way the Relay Module works is that switching a channel changes whether the middle terminal is connected to the left or to the right contact. By default (relay off) the middle terminal is connected to the right one, and that’s what we can see in the small schematic under the terminals.

After doing the connections properly, you can now use the following script to turn your computer on:

#!/usr/bin/python3

import RPi.GPIO as GPIO
import time
import argparse

#initialize GPIO pins
GPIO.setmode(GPIO.BCM)

#use command line arguments parser to decide whether switching should be long or short
#The default port I use here is 6. You can change it to whatever you're using to control your computer.
parser = argparse.ArgumentParser()
parser.add_argument("-port", "--portnumber", dest = "port", default = "6", help="Port number on GPIO of Raspberry Pi")
#This option can be either long or short. Short is for normal computer turning on and off, and long is for if the computer froze.
parser.add_argument("-len","--len", dest = "length", default = "short" , help = "Length of the switching, long or short")

args = parser.parse_args()

#initialize the port that you'll use
GPIO.setup(int(args.port), GPIO.OUT)


#switch relay state, wait some time (long or short) then switch it back. This acts like pressing the switch button.
GPIO.output(int(args.port),False)
if args.length == "long":
    time.sleep(8)
elif args.length == "short":
    time.sleep(0.5)
else:
    print("Error: parameter -len can be only long or short")
GPIO.output(int(args.port),True)

Save this script as, say, “switch.py”, and make it executable as we did before:

chmod +x switch.py

Now you can test running this script, and this is supposed to start your computer!

sudo ./switch.py
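As an example of the -len option defined in the script: if the computer is frozen, holding the virtual button for 8 seconds forces it off, so (assuming your relay channel is on GPIO port 6, the script’s default) you would run:

sudo ./switch.py -port 6 -len long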

Running the program from a web browser

You could already be satisfied with switching the computer remotely using ssh, but I made the process a little bit fancier using a php webpage.

CAVEAT:

Here, I explain how you could get the job done to have a webpage that turns on/off your computer. I don’t focus on security. Be careful not to make your home/work network’s components accessible to the public. Please consult some expert to verify that what you’re doing is acceptable and does not create a security threat for others’ data.

Using the super-user’s sudo without having to type the password every time

In order to trigger this from the web, the web-server has to be able to change pin states without entering a password.

To do this, run the command

sudo visudo

This will open a text editor. If your username on the Raspberry Pi’s linux is myuser, then add the following lines to that file:

www-data ALL=(myuser) NOPASSWD: ALL
myuser ALL=(ALL) NOPASSWD: ALL

This will allow the Apache user to execute the sudo command as you, while you yourself get full super-user power without a password. Notice that this is not the best solution from a security point of view, but it just works. The better solution is to allow the user www-data to run only a specific command as root: just replace the last “ALL” of the www-data line with a comma-separated list of the commands you wanna allow www-data to run, and replace “myuser” between the parentheses with “root”. I recommend you do that after having succeeded, to minimize the possible mistakes you could make; this is a legitimate development technique, where we start with something less perfect, test it, then perfect it one piece at a time.
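As a concrete sketch of that tighter rule (assuming the script lives at /home/myuser/switch.py, as in the PHP page below, and that you want to allow the -len/-port arguments, hence the trailing wildcard), the www-data line would become something like:

www-data ALL=(root) NOPASSWD: /usr/bin/python3 /home/myuser/switch.py *

The PHP page would then call sudo directly as root (“sudo /usr/bin/python3 /home/myuser/switch.py -len short”) instead of going through “sudo -u myuser sudo …”. Check the sudoers manual for the exact argument-matching rules before relying on this.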

Installing Apache web-server

First, install the web-server on your Raspberry Pi. Do this by running this set of commands in your terminal:

sudo apt-get install apache2
sudo apt-get install php5
sudo apt-get install libapache2-mod-php5
sudo a2enmod php5
sudo a2enmod alias
sudo service apache2 restart

I hope I haven’t forgotten any necessary components, but there are many tutorials and forums out there discussing how to start an apache webserver.

If the apache installation is a success, then you could go to your web-browser and see whether it’s working. First, get the hostname of your Raspberry Pi by running this command in Raspberry Pi’s terminal

hostname

Let’s say your hostname is “myhostname”. Now go to your browser, and enter this address:

http://myhostname/

If this gives you a webpage, then the web-server is working fine and you can proceed. Otherwise, if the browser gives an error, you have to debug your web-server and get it working. Please consult some tutorial online to help you run the apache server.

Creating the webpage:

The default directory where the main webpage is stored in apache is either “/var/www/” or “/var/www/html/”. Check where the index.html that you saw is, and place the new php file there. Say that php file has the name “control.php”, and say the default directory is “/var/www/”. Then, go to that directory using

cd /var/www/

Now create the new php page using the command

sudo nano control.php

And use the script

<!DOCTYPE html>
<html>
<head>
<title>Control page</title>
</head>
<body>
<form action="#" method="post">
 <center>
 <select name="switchlen">
 <option value="short">Short</option>
 <option value="long">Long</option>
 </select>
 <input type="submit" value="Switch server" name="submit">
 </center>
</form>

<?php
if(isset($_POST['submit']))
{
 if($_POST['switchlen'] == "long")
 {
 echo("This is long");
 echo("<br>");
 $command = "sudo -u myuser sudo /usr/bin/python3 /home/myuser/switch.py -len long > debug.log 2>&1";
 }
 else if($_POST['switchlen'] == "short")
 {
 echo("This is short");
 echo("<br>");
 $command = "sudo -u myuser sudo /usr/bin/python3 /home/myuser/switch.py -len short > debug.log 2>&1";
 }
 $output = shell_exec($command);
 echo("Script return: ");
 var_dump($output);
}
?>
</body>
</html>

Don’t forget to change the path of the script to the correct path of your script “switch.py”, and change “myuser” to the username you’re using in your Raspberry Pi. After you’re done, press Ctrl+X to save and exit.

To use this page and have it successfully run the script, you have to do one more thing, which is making apache’s user own this file. The username for the apache web-server is called “www-data”, so assuming you called the file “control.php”, you have to run this command:

sudo chown www-data:www-data control.php

After you run this command, you should use “sudo nano” to edit this file instead of only nano, since your linux user doesn’t own the file anymore.

Also, don’t save the file in any place other than the original folder of apache (like /var/www), at least not before you make sure it works. Adding new folders that apache recognizes is something that requires additional steps that I don’t discuss here. Please consult an apache tutorial for that.

To test the new php page, go to the link:

http://myhostname/control.php

If the website doesn’t work, check the file “debug.log” in the same directory as control.php. It will tell you what went wrong in the script.

Controlling your computer from outside your home

If you’re in a home network, then you can only access that webpage from within the network. If you would like to access it from outside the network, you have to have VPN access to your home network. Consider achieving this using OpenVPN. That’s how I do it. I may write an article about it some time in the future.

Conclusion

I hope this article has given you an idea on how to control your appliances using Raspberry Pi. We have shown how to turn a computer on/off remotely.

I do this for fun, but also more professional tasks can be achieved using similar scripts, such as controlling scientific experiments.

Designing high magnetic field coils efficiently

It sounds simple, I agree. But the problem is like designing a car. You could simply put some wheels together with a simple set of chains and create a “car” that moves in your front lawn. But if you want to design a professional car, you’ll spend much more time and expertise doing it. And if you wanna build a sports car, you’ll have to fine-tune every single parameter of your components to push the edge of what you can achieve.

This story doesn’t apply only to coils, but to everything you design (actually, I learned it from a book called “Professional C++”, in a chapter discussing the efficiency of algorithms; it’s not far from this case).

Concerning coils, if you just want to generate a few microtesla with no concerns about the homogeneity of the coil you’re designing, then it doesn’t matter how you do it. You could just wind a bunch of wires on your hand and you’re done. Today I’m not talking about such a thing. Today I’m gonna talk about coils that you want to use to generate relatively large fields, with reasonable homogeneity.

For such specifications, if you’re careless about the details, you may end up wasting lots of time on trial and error, and lots of money on cooling and power supplies. If you plan it carefully, you’ll save lots of money.

Parameters of the system

Any coils you design to generate magnetic fields will have a set of parameters that you have to tune. Among those parameters I mention:

• Dimensions and geometry of the coils

• Magnetic field to be generated

• Thickness of the wires to be used

Where to start

The first thing one should determine for a coil system is the geometry of the coils. This is because the purpose of the coils is not just to generate a field, period; it’s to generate a field over some region that is well defined for a specific experiment. One should know in advance what sample will be exposed to the magnetic field. If you don’t know that, you could spend days working on your coils, and eventually find out that your coils don’t fit your sample, or that your coils are so huge that you’re wasting lots of power to generate fields you don’t need (the math below will show how important the size of your coils is for the power you need).

Therefore, start by defining the geometry of your coils. Once you know some characteristic radius or length of your coils, you have one step done. One more thing to pre-define is how much wiring volume you want to use.

Optimum coil thickness

Any person starting to design coils will have this question in mind: what’s the optimum wire thickness that should be used to get the optimum magnetic field? Guess what? If you know how much wire-volume you will use, then it doesn’t matter at all. Let’s do the math to verify this.

Suppose you have a slot in which to wind your wires, which have a diameter $d$, and the slot has width $w$ and height $h$, as the figure shows. How can we optimally fill the slot with wires? The optimal way is what the figure shows; it’s called “hexagonal packing”. This is the case if we ignore third-dimension problems, such as wires running diagonally in the slot, which is a good approximation if the wires are wound carefully.

Cross section of packed wires (or packed cylinders)

Notice that it’s very important to have a high number of circles per direction. This is to make sure that the flow of the current is homogeneous in the volume. Imagine the opposite case, where you have a single circle with diameter $d=h$. This would modify the magnetic field you expect from the wire, as all the calculations of the magnetic field from a wire assume an infinitesimally thin wire.

With that said, let’s calculate the magnetic field from such a configuration. Any magnetic field generated from a current has the following form using the Biot-Savart law or Ampere’s law:

$$B=U\left(r\right)\cdot n\cdot I,$$

where $U\left(r\right)$ is a coefficient that depends on the dimensions of a coil with characteristic radius $r$, $I$ is the electrical current, and $n$ is the total number of turns of the coil. We assumed that we know the geometry of the coil, meaning that $U\left(r\right)$ is fixed before starting this calculation. Let’s see how the wire diameter modifies the magnetic field.
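To make $U\left(r\right)$ concrete with a familiar special case: at the center of a flat circular coil of radius $r$ with $n$ turns carrying a current $I$, classical electromagnetism gives

$$B=\frac{\mu_{0}\,n\,I}{2r},$$

so there $U\left(r\right)=\frac{\mu_{0}}{2r}$. The exact form of $U\left(r\right)$ differs for other geometries, but all that matters below is that it’s fixed once the geometry is fixed.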

From the definition of resistivity of a wire, we have

$$R=\rho\frac{\ell}{A}=\rho\frac{4\ell}{\pi d^{2}},$$

where $R$ is the resistance of the wire, $\ell$ is its length, $A=\pi\left(\frac{d}{2}\right)^{2}$ is its cross sectional area and $\rho$ is its resistivity. On the other hand, the power dissipated in the coil is given by the formula

$$P=I^{2}R\quad\Longrightarrow\quad I=\sqrt{\frac{P}{R}},$$

and from the resistivity formula

$$I=\sqrt{\frac{P}{R}}=\frac{d}{2}\sqrt{\frac{\pi P}{\rho\ell}},$$

and we can put this in a form proportional to the magnetic field

$$B\propto n\cdot I=n\frac{d}{2}\sqrt{\frac{\pi P}{\rho\ell}}$$

There’s still one more question remaining to settle this: what length of wire is required? The length of the wire depends on the diameter of the wire that will be used, since a thicker wire fills the area of the slot faster. Whether you wind your wire on a circle or a square, the length of wire used depends linearly on the side length or diameter of that geometry. If we call that $r$, then the length of the wire is going to be $\ell=n\cdot k\cdot r$, where $k$ is $2\pi$ for a circular coil and $4$ for a square coil, and $n$ is the number of turns. The formula then becomes:

$$n\cdot I=n\frac{d}{2}\sqrt{\frac{\pi P}{\rho\ell}}=n\frac{d}{2}\sqrt{\frac{\pi P}{\rho n\cdot k\cdot r}}=\frac{d}{2}\sqrt{\frac{\pi nP}{\rho\cdot k\cdot r}}.$$

Finally, the number of turns can be seen as the number of circles stacked in a hexagonal packing (as shown in the figure above). The number of circles (or cylindrical wires) with diameter $d$ packed in a slot of width $w$ and height $h$ can be approximated, assuming a high number of circles (cylinders) per direction, as

$$n=\frac{w}{d}\cdot\frac{h}{d\cos30^{\circ}}=\frac{2}{\sqrt{3}}\frac{wh}{d^{2}},$$

where the factor $\cos30^{\circ}$ is simply due to the space saved with the hexagonal packing. In fact it doesn’t matter what packing you choose, since all packings will depend inversely on $d^{2}$, which is the important part. With this, the current formula becomes:

$$n\cdot I=\frac{d}{2}\sqrt{\frac{\pi nP}{\rho\cdot k\cdot r}}=\frac{1}{2}\sqrt{\frac{2\pi}{\sqrt{3}}}\sqrt{\frac{ whP}{\rho k r}}.$$

This result shows that the field doesn’t depend on the wire that we choose or the current we use. The magnetic field depends only on the power if the geometry of the coil is fixed.

One more important result here is that the power required is linearly proportional to $r$, which means that if your coils had half the radius, only half the power would be required for the same ampere-turns. Thus it’s considerably important to make the coils as small as possible.
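If you’d rather convince yourself numerically, here’s a small Python sketch of the formulas above (the numbers are arbitrary and only for illustration: copper wire, a 2 cm × 2 cm winding slot, a circular coil of 5 cm radius and a 100 W power budget). The ampere-turns $n\cdot I$, and hence the field, come out the same for every wire diameter:

import math

rho = 1.68e-8       # resistivity of copper, ohm*m
w, h = 0.02, 0.02   # winding slot cross section, m
r = 0.05            # coil radius, m
k = 2 * math.pi     # k = 2*pi for a circular coil
P = 100.0           # power budget, W

for d in (0.5e-3, 1.0e-3, 2.0e-3):          # wire diameters, m
    n = 2 / math.sqrt(3) * w * h / d**2     # number of turns from hexagonal packing
    ell = n * k * r                         # total wire length
    R = rho * 4 * ell / (math.pi * d**2)    # wire resistance
    I = math.sqrt(P / R)                    # current at the given power
    print("d = %.1f mm: n*I = %.1f ampere-turns" % (d * 1e3, n * I))

The individual values of $n$, $R$ and $I$ change wildly with $d$, but their combination $n\cdot I$ doesn’t.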

How to choose the wire diameter?

Having the magnetic field not depend on the wire thickness doesn’t mean we can choose it arbitrarily. In fact, the wire diameter determines a very important characteristic: the voltage and amperage that the power supply will need to deliver. It doesn’t make sense to build a coil that requires 150 amperes at 0.1 volts, and that’s what the wire diameter determines for you. The thinner the wire, the more resistance the coil has, and thus the more voltage is required.

Thus, to fine-tune the production of your coil, decide the diameter of the wire after choosing the power supply that has the appropriate voltage and amperage.
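To see how the wire diameter splits a fixed power budget between voltage and current: with the coil resistance $R$ from the resistivity formula above (thinner wire and more turns mean a larger $R$), the operating point of the power supply is

$$V=\sqrt{P\,R},\qquad I=\sqrt{\frac{P}{R}}.$$

So the same power $P$ can be delivered as high voltage and low current (thin wire) or low voltage and high current (thick wire); pick the wire so that this operating point matches a power supply you can actually buy.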

Conclusion

It’s intuitive to think that the amount of current determines how much field you’ll get; this comes from simple, classical electromagnetism. However, in practice the field doesn’t really depend on the current alone, because the choice of the current depends on other parameters that are prioritized, such as the geometry of the coils. If the geometry of the coils is fixed, then it doesn’t matter how much current you put in; what matters in practice is how much power you put in.

This result is important for high-field generators, since higher fields entail higher power requirements, leading to heating problems. If your coils generate more heat than your system can dissipate, then it makes sense to think beforehand about whether the geometry should be made smaller or different, since a smaller geometry means a lower power requirement.