In this tutorial, we’ll integrate a few of the most popular technical indicators into our Bitcoin trading bot so it can learn to make even better decisions while trading automatically in the market.

In my previous tutorials, we created a custom Python environment to trade Bitcoin and wrote a Reinforcement Learning agent to trade in it. We also tested three different architectures (Dense, CNN, LSTM) and compared their performance, training durations, and profitability. It occurred to me that there are probably no traders or investors doing blind trades without some kind of technical or fundamental analysis; more or less everyone uses technical indicators. So I thought: if we can create a trading bot that makes somewhat profitable trades from price action alone, maybe we can improve its accuracy and profitability by integrating indicators? …
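As a taste of the idea, here is a minimal sketch of how such indicators can be computed from price data with pandas. The column name and window sizes below are illustrative assumptions, not the exact ones used in the tutorial code:

import pandas as pd

# Hypothetical closing prices; in the tutorials the data comes from a CSV
# of historical Bitcoin prices. The column name is an illustrative assumption.
df = pd.DataFrame({
    "Close": [100.0, 101.5, 99.8, 102.3, 103.1, 101.9, 104.2, 105.0]
})

# Simple moving average: the mean of the last n closing prices.
df["SMA_3"] = df["Close"].rolling(window=3).mean()

# Exponential moving average: weights recent prices more heavily.
df["EMA_3"] = df["Close"].ewm(span=3, adjust=False).mean()

# These indicator columns can then be appended to the observation
# the RL agent sees on every environment step.
print(df)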


Let’s improve our deep RL Bitcoin trading agent’s code to make even more money with a better reward strategy and by testing different model structures

In the last article, we used deep reinforcement learning to create a Bitcoin trading agent that could beat the market. Although our agent was profitable compared to random actions, the results weren’t all that impressive, so this time we’re going to step it up: we’ll implement a few more improvements to the reward system and test how our profitability depends on the neural network model structure. As always, we’ll test everything in code, and of course, you’ll be able to download everything from my GitHub repository.

Reward Optimization

I would like to mention that, while writing my reward strategy, it was quite hard to find what reward strategies others use for reinforcement learning in automated trading. The few I could find were poorly documented and quite complicated to understand. I believe there are a lot of interesting and successful solutions out there, but for this tutorial, I decided to rely on my own intuition and try a strategy of my own. …
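To make the discussion concrete, here is one simple baseline that trading environments often start from: rewarding the agent with the change in net worth on each step. This is an illustrative sketch only, not necessarily the strategy developed in this article:

def step_reward(prev_net_worth: float, net_worth: float) -> float:
    """Reward the agent with the change in account value on this step.

    A positive reward means the portfolio grew, a negative one means it
    shrank. This is a common, simple baseline; more elaborate strategies
    penalize drawdowns or excessive trading on top of it.
    """
    return net_worth - prev_net_worth

# Example: net worth grew from 1000 to 1012 on this step -> reward of 12.
print(step_reward(1000.0, 1012.0))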


In this tutorial, we will continue developing a Bitcoin trading bot, but this time instead of doing trades randomly, we’ll use the power of reinforcement learning.

The purpose of the previous tutorial and this one is to experiment with state-of-the-art deep reinforcement learning technologies to see if we can create a profitable Bitcoin trading bot. Many articles on the internet state that neural networks can’t beat the market. However, recent advances in the field have shown that RL agents are often capable of learning much more than supervised learning agents within similar problem domains. For this reason, I’m writing these tutorials to test whether it’s possible, and if so, how profitable the trading bots we build can be.

While I won’t be creating anything quite as impressive as OpenAI’s engineers do, it is still no easy task to trade Bitcoin profitably on a day-to-day basis! …


In this part, we are going to extend the code written in my previous tutorial to render a visualization of the RL Bitcoin trading bot using Matplotlib and Python. If you haven't read my previous tutorial and aren't familiar with the code I wrote, I recommend reading it first before continuing with this one.

If you are unfamiliar with the matplotlib Python library, don’t worry. I will go over the code step-by-step, so you can create your own custom visualization or modify mine according to your needs. And if you can’t follow everything, as always, the tutorial code is available on my GitHub repository.

Before moving forward, I will show you a short preview of what we are going to create in this tutorial:

(GIF by author, generated with Matplotlib)

At first look, it might seem quite complicated, but actually, it’s not that hard. It’s just a couple of parameters updating on each environment step with some key information, with the help of Matplotlib’s built-in functionality. …
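To give a flavor of how such live rendering works, here is a minimal sketch that redraws a price line on every "step" using Matplotlib's interactive mode. The random price walk is a stand-in for real environment data; the tutorial's visualization adds candlesticks, trade markers, and more on top of this idea:

import random
import matplotlib.pyplot as plt

plt.ion()                      # interactive mode: draw without blocking
fig, ax = plt.subplots()

prices = [100.0]
for step in range(50):
    # Fake price movement standing in for one environment step.
    prices.append(prices[-1] + random.uniform(-1, 1))

    ax.clear()                 # redraw the whole chart each step
    ax.plot(prices, color="tab:blue")
    ax.set_title(f"Step {step}")
    ax.set_xlabel("Step")
    ax.set_ylabel("Price")
    plt.pause(0.01)            # let Matplotlib process the draw

plt.ioff()
plt.show()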


In this tutorial, we will write a step-by-step foundation for our custom Bitcoin trading environment, where we can do further development, tests, and experiments.

In my previous LunarLander-v2 and BipedalWalker-v3 tutorials, I gathered experience writing Proximal Policy Optimization algorithms to get the background needed to start developing my own RL environment. Those tutorials were built on OpenAI’s gym package, which lets us test our own reinforcement learning algorithms on a ton of free environments.

These OpenAI environments are great for learning, but eventually we will want to set up an agent to solve a custom problem. To do this, we need to create a custom environment specific to our problem domain. Once we have our custom environment, we can create a custom Reinforcement Learning agent to simulate crypto trades. …
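As a rough preview, the skeleton below follows OpenAI gym's Env interface, which is what such a custom environment builds on. The action and observation spaces here are placeholder assumptions, not the exact ones used in the tutorial:

import gym
import numpy as np
from gym import spaces

class BitcoinTradingEnv(gym.Env):
    """Skeleton of a custom trading environment in the gym style."""

    def __init__(self):
        super().__init__()
        # 3 discrete actions: 0 = hold, 1 = buy, 2 = sell (an assumption).
        self.action_space = spaces.Discrete(3)
        # Observation: a placeholder vector of 10 market features.
        self.observation_space = spaces.Box(
            low=-np.inf, high=np.inf, shape=(10,), dtype=np.float32)

    def reset(self):
        self.current_step = 0
        return np.zeros(10, dtype=np.float32)

    def step(self, action):
        self.current_step += 1
        obs = np.zeros(10, dtype=np.float32)   # next market state goes here
        reward = 0.0                           # e.g. change in net worth
        done = self.current_step >= 100        # episode length placeholder
        return obs, reward, done, {}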


In this tutorial, we’ll learn more about continuous Reinforcement Learning agents and how to teach BipedalWalker-v3 to walk!

First of all, I should mention that this tutorial is a continuation of my previous tutorial, where I covered PPO with discrete actions.

To develop a continuous-action-space Proximal Policy Optimization algorithm, we must first understand the difference between discrete and continuous action spaces. Because the LunarLander-v2 environment also has a continuous counterpart called LunarLanderContinuous-v2, I’ll use the two to explain the difference:

  • LunarLander-v2 has a Discrete(4) action space. This means there are 4 possible actions (left engine, right engine, main engine, and do nothing), and we tell the environment which one we want to execute. …
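To see the difference in code, we can construct both kinds of action spaces directly with gym:

import numpy as np
from gym import spaces

# Discrete: one of 4 mutually exclusive actions, as in LunarLander-v2
# (do nothing, fire left engine, fire main engine, fire right engine).
discrete = spaces.Discrete(4)
print(discrete.sample())        # e.g. 2

# Continuous: a 2-dimensional vector in [-1, 1], as in LunarLanderContinuous-v2
# (main engine throttle, left/right engine throttle).
continuous = spaces.Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32)
print(continuous.sample())      # e.g. [ 0.13 -0.87]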


Getting Started

Let’s code from scratch a discrete Reinforcement Learning rocket landing agent!

(GIF by author)

Welcome to another part of my step-by-step reinforcement learning tutorial with gym and TensorFlow 2. I’ll show you how to implement the Reinforcement Learning algorithm known as Proximal Policy Optimization (PPO) to teach an AI agent how to land a rocket (LunarLander-v2). By the end of this tutorial, you’ll know how to apply an on-policy learning method in an actor-critic framework to learn to navigate any discrete game environment. Following this tutorial, I will create a similar one for a continuous environment. …
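The heart of PPO is its clipped surrogate objective, so here is a minimal sketch of that loss in TensorFlow 2. The 0.2 clipping range is the common default from the PPO paper, not necessarily the exact value used in this tutorial's code:

import tensorflow as tf

def ppo_clip_loss(advantages, old_log_probs, new_log_probs, clip_range=0.2):
    """Clipped surrogate loss from the PPO paper.

    The probability ratio r = pi_new(a|s) / pi_old(a|s) is clipped to
    [1 - clip_range, 1 + clip_range] so a single update cannot move the
    policy too far from the one that collected the data.
    """
    ratio = tf.exp(new_log_probs - old_log_probs)
    unclipped = ratio * advantages
    clipped = tf.clip_by_value(ratio, 1.0 - clip_range, 1.0 + clip_range) * advantages
    # Maximize the surrogate objective -> minimize its negative mean.
    return -tf.reduce_mean(tf.minimum(unclipped, clipped))

# Example with dummy values for a batch of 3 transitions:
adv = tf.constant([1.0, -0.5, 2.0])
old_lp = tf.math.log(tf.constant([0.3, 0.4, 0.2]))
new_lp = tf.math.log(tf.constant([0.35, 0.38, 0.25]))
print(ppo_clip_loss(adv, old_lp, new_lp).numpy())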


How I created a YOLOv4 object detection Counter-Strike: Global Offensive aimbot

Welcome to the last YOLOv4 custom object detection tutorial in the whole series. After giving you a lot of explanations about YOLO, I decided to create something fun and interesting: a Counter-Strike: Global Offensive aimbot. What is an aimbot? The idea is to create an enemy detector, aim at that enemy, and shoot it. We could use the same idea in real life for some kind of security system; for example, to detect and recognize people around our private house and, if it’s an unknown person, aim lights at them or turn on some kind of alarm. I also thought it would be fun to create a water gun robot that could aim at approaching people and shoot water. Why water? …


This tutorial is one of the last tutorials in my YOLO object detection series. After covering, I think, almost everything about YOLO, I thought it would be useful to implement something interesting and fun. Then I remembered my old Counter-Strike: Global Offensive TensorFlow aimbot, where I used TensorFlow to detect enemies. That project was unsuccessful because detection became a bottleneck: while TensorFlow was processing each frame to detect enemies, FPS dropped to 4–5 frames per second, and it became impossible for our bot to play the game. …


This tutorial is a brief introduction to multiprocessing in Python. At the end of this tutorial, I will show how I use it to make TensorFlow and YOLO object detection work faster.

What is multiprocessing? Multiprocessing refers to the ability of a system to use more than one processor at the same time. Applications in a multiprocessing system are broken into smaller routines that run independently. The operating system allocates these routines to the processors, improving the performance of the system.

Image for post
Image for post

Why multiprocessing? Consider a computer system with a single processor. If it is assigned several processes at the same time, it will have to interrupt each task and switch briefly to another to keep all of the processes going. …
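As a small preview, here is a minimal example that spreads a CPU-bound function over several worker processes with Python's multiprocessing module; the square function is just an illustrative stand-in for real work such as processing a video frame:

import multiprocessing

def square(n):
    # Stand-in for CPU-bound work, e.g. pre/post-processing a video frame.
    return n * n

if __name__ == "__main__":
    with multiprocessing.Pool(processes=4) as pool:
        # Each number is squared in a separate worker process.
        results = pool.map(square, range(10))
    print(results)   # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]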

About

Rokas Balsys

Welcome to my blog. This blog focuses on basic and advanced Python tutorials for software developers, programmers, and engineers.
