Welcome to our wiki!
This is a collection of resources from past workshops that we have conducted, such as those from our Hackers Toolbox series. Feel free to use the information here for your own learning, experimentation and hacks.
Reach out to us if you want to conduct your own workshop!
This workshop is renamed and heavily lifted from the original workshop: Hacker Tools - Shell and Scripting, which itself is lifted from the
There are tons of ways for us to interface with modern computers, from beautiful GUIs to web applications. But those can only really get us so far. To use your computer to its maximum potential and efficiency, learning a textual interface like the terminal is necessary.
Many applications and use cases are terminal-first, as the terminal is extremely easy to develop for and work with: no fancy frameworks or dependencies needed, just good ol' text input and output. In this specific case, we'll be going through Bash, a POSIX-compatible Unix shell.
Without going into too much detail about the history of Unix, we just need to understand that Unix was a really great idea for what an operating system should be like, and several modern operating systems are descended from it. The important thing for us to know is that it gave rise to the Unix Philosophy, which guides a lot of how a terminal should work and function.
The Unix Philosophy can be summarized as such:
Write programs that do one thing and do it well.
Write programs to work together.
Write programs to handle text streams, because that is a universal interface.
and these principles still guide a lot of applications today!
A bit more on modern, standard tools we use on the command line
Nearly all platforms you can get your hands on have a shell in one form or another, and many of them have several shells for you to choose from. While they may vary in the details, at their core they are all roughly the same: they allow you to run programs, give them input, and inspect their output in a semi-structured way.
-- Excerpt from the Missing Semester
For the most part, when people talk about the terminal, we normally talk about Bash, or POSIX-compliant shells. POSIX is just a fancy name for a set of rules that a shell should abide by, so that different shells can have similar behaviour. Other shells include:
It is very common for many Github workflows to involve:
(Optional) Creating a fork of a repository
Working on a feature/bug fix on a feature branch
Pushing the feature branch to Github
Creating a pull request of the feature branch to the main branch of the repository
Have some set of tests and automated checks start to verify the state of the pull request
We are going to replicate this workflow on the example application.
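In plain git commands, steps like these usually look as follows. (The URL and branch names below are placeholders for illustration, not the actual example application.)

```shell
# (optional) fork the repository on GitHub first, then clone YOUR fork
git clone https://github.com/<your-username>/example-app.git
cd example-app

# work on a feature branch
git switch -c feat/calculator
# ...edit files...
git add .
git commit -m "Add calculator feature"

# push the feature branch to GitHub
git push -u origin feat/calculator

# finally, open a pull request on GitHub from feat/calculator into main
```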
To avoid being overwhelmed with tasks, let's break down the "expected set of tests and automated checks" to be the following:
Run unit tests (calculator.test.ts)
Lint
Then, once the pull request is merged in main, we also want to (3) deploy it to Github Pages.
This section will introduce you to HTML, a markup language that is used to define and structure webpages. We'll start off by making a page and adding some elements, then adding more elements and giving those element properties (or attributes), add an HTML form to submit queries to Google Search, and wrap up with a look into Firefox's Browser Inspector tool.
fish
zsh
nushell
powershell
In modern operating systems, to open a shell prompt, you often need a terminal. Think of it as a nice GUI wrapping the textual interface (the shell). Your device most likely ships with one, or you can easily install one.
When you first launch a terminal, you will see a prompt, similar to or a slight variation of:
This prompt tells you, for example, that your username is chun on the machine named legion, and that your 'working directory' is ~ (short for home; we'll get to that). You should also see a blinking cursor where you can type a command; when you hit Enter, the command you've typed is executed.
Here are some really simple commands you should be able to run (but some may not be installed depending on your machine):
date - shows the date
cal - a tiny calendar
uptime - shows how long your computer has been powered on
echo - echoes what you typed
These commands are neat, but we can't really do much in the command line until we understand the concept of files and directories (folders).
All your files and directories on your system are stored in a structure known as a tree. This tree starts from a 'root' directory, this will be / on Linux and MacOS, and something like C:\ on Windows.
A path on the shell is just a list of directories, separated by / on Linux and macOS and \ on Windows. For example:
There are two types of paths:
An absolute path is a path that starts from the root directory
Relative paths are relative to your current working directory, or where your shell currently is
To see what your current working directory is, use print working directory, or pwd for short, in the terminal.
In a path, . refers to the current directory, and .. refers to the parent directory.
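For example (the directory names here are just for illustration; assume a folder projects/demo inside your home directory):

```shell
cd ~                  # start at your home directory
pwd                   # shows the absolute path, e.g. /home/chun

cd projects/demo      # a relative path, going down from where we are
cd ..                 # .. moves us up to the parent: projects
cd ./demo             # . is "here", so this goes back down into demo

cd /home/chun/projects   # the same place, reached via an absolute path
```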
man - to get the manual pages of a command
cd - to change directory
ls - to list files and directories
rm - to remove files and directories
cp - to copy files
mv - to move files
pwd - to print the working directory
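Here's a short session tying these commands together (the file and directory names are made up):

```shell
mkdir demo && cd demo     # make a directory and move into it
touch notes.txt           # create an empty file (more on touch shortly)
ls                        # lists: notes.txt
cp notes.txt backup.txt   # copy a file
mv backup.txt old.txt     # move (here: rename) a file
rm old.txt                # remove a file
# man ls                  # read the manual page for ls (press q to quit)
```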
bash has shortcuts that are based on emacs keybindings:
Ctrl + a - beginning of line
Ctrl + e - end of line
Alt + b - move back one word
Alt + f - move forward one word
Ctrl + k - delete from cursor to end of line
Ctrl + _ - undo
And some special keybindings:
Ctrl + u - delete from cursor to the start of line
Ctrl + w - delete from cursor to the start of word
Ctrl + c - terminates the command
Ctrl + z - suspends the command
Ctrl + l - clears the screen
Ctrl + s - stops the output to the screen
Ctrl + q - allows output to the screen
You can find even more by doing man readline
There are a few ways you can make changes to your filesystem, be it editing files or directories/folders:
mkdir to make a new empty directory
touch <filename> to make a new empty file
nano <filename> to open an editor to edit the file
Ctrl + o - to save
Ctrl + x - to exit
What if we want to find our previously used commands? If you haven't already realized, you should be able to use the up arrow to scroll through your previous commands, but it's not very efficient. This is where we can use a command called history.
Remember what we said about the Unix Philosophy? A big part of it is the idea of programs working well together. The terminal allows this by allowing the output of one program to be the input of another program. This is known as piping. To pipe we can do something like:
history prints out the entire command history as the output
grep takes in an input and tries to filter for the keyword "echo"
Some other ways you could use pipes:
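For instance (these are illustrative pipelines, not the only options):

```shell
# count how many entries are in the current directory
ls | wc -l

# feed a program's output through two more programs in a row
printf 'banana\napple\ncherry\n' | sort | head -n 2   # prints apple, then banana
```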
There are more ways we can compose programs, which we'll go through in the scripting section!
Download and install VirtualBox
Download Ubuntu 24.04 ISO file
Linux is a Unix-like operating system kernel, known for being the most popular kernel in the world. It's widely used in various devices and systems, including Android smartphones, Chromebooks, most routers, servers, and even supercomputers.
Unix-like systems, including Linux, are at the heart of the most popular operating system family in the world. Their architecture and principles have influenced countless other systems, making them a staple in the world of computing.
If you're a computing student, sooner or later, you'll find yourself developing for a Unix-like platform!
A virtual machine (VM) is a simulated computer that allows you to run an operating system and applications in a completely isolated environment. You can configure a guest virtual machine with any operating system and settings you want, and use it without affecting your host environment.
Virtual machines are incredibly useful for several reasons:
Experimentation: They allow you to test out different operating systems, software, and configurations without the risk of damaging your main system.
Software Compatibility: You can run software that is only compatible with a specific operating system by creating a VM with that OS.
Safe Testing: If you need to experiment with potentially malicious software, a VM provides a safe, isolated environment to do so.
Isolation: A VM isolates the guest environment from the host, which means you can run buggy or untrusted software with a reasonable level of safety.
Snapshots: VMs can take snapshots, which capture the entire machine's state at a particular point in time. This allows you to make changes, test configurations, or install software and then easily revert to a previous state if something goes wrong.
However, VMs also have some drawbacks:
Performance: VMs are generally slower compared to running an operating system directly on your hardware (bare metal).
Resource Competition: VMs share the host system's resources, such as CPU, memory, and storage, which can impact performance.
Unsuitability for Certain Applications: VMs might not be ideal for resource-intensive applications like games or high-performance computing tasks.
We're using VirtualBox because it offers several advantages:
Free and Open-Source Software (FOSS): VirtualBox is completely free and open-source, making it accessible to everyone.
Graphical User Interface (GUI): It comes with a user-friendly graphical interface, making it easier to use, especially for beginners.
Cross-Platform: VirtualBox works on multiple operating systems, including Windows and Linux. There is a developer preview for M1+ Macs, but the performance isn't great.
1. VirtualBox Main UI
Once you open VirtualBox, click on the "New" button to start creating a new virtual machine.
2. Creating a New VM
Name: Enter "Ubuntu" as the name of your new virtual machine. VirtualBox should automatically detect the type (Linux) and version (Ubuntu) based on the name.
3. Set the Amount of Memory (RAM)
Memory Allocation: Ubuntu requires a minimum of 512 MiB of RAM and recommends 2 GiB. However, as a general rule, do not allocate more than 1/4 of your physical RAM to the virtual machine to ensure that your host system runs smoothly.
4. Create a Virtual Hard Disk
Step 1: Your virtual machine needs a virtual hard disk. Click on "Create" to begin setting it up.
Step 2: Use the default virtual hard disk format for the best performance.
Step 3: Opt for a dynamically allocated disk. This means the virtual hard disk will only use as much space as it currently needs, rather than reserving a large chunk of your storage upfront.
Step 4: Ubuntu recommends a minimum of 10 GiB of storage and 25 GiB for a full installation. For this guide, we'll be using the minimum installation, which will require about 6 GiB.
After setting up the virtual hard disk, return to the main VirtualBox interface and click on "Settings".
Storage Settings: Navigate to "Storage", then select the "Empty" slot under "Controller: IDE". Click on the disc icon beside "IDE Secondary Master", and then choose "Choose Virtual Optical Disk File".
Select the Ubuntu ISO file that you have downloaded earlier.
You're all set up with VirtualBox! You can further customize your settings later if needed. For now, return to the main UI and click "Start" to launch your new virtual machine.
Skip this part if you're not familiar with working in a command line environment
We can easily do exactly what we just did in a matter of seconds using the command line interface (CLI).
Once your virtual machine starts and boots up, you should see a screen like this:
Step 1: Choose "Install Ubuntu" to begin the installation process.
Step 2: Select your keyboard layout. If you're using a computer purchased in Singapore, the default layout should be "English (US)."
Step 3: Opt for a "Minimal Installation" to save time during the installation process. You can leave the checkboxes unticked for a quicker setup.
Step 4: Choose "Erase disk and install Ubuntu" to proceed with the installation. When the dialogue box appears, click "Continue" to confirm.
Step 5: Ubuntu should automatically detect your location as Singapore. If it doesn't, you can manually select your location.
Step 6: Enter your name and create a password for your Ubuntu installation.
Step 7: Now, just sit back and relax while Ubuntu installs. This may take some time, so feel free to take a break while the installation completes.
Many Linux distributions offer Guest Additions through their package repositories, making the installation process straightforward. To install them:
Open a terminal in your Ubuntu virtual machine.
Run the following commands:
This will install the necessary utilities and drivers to enable Guest Additions features.
If you're using an operating system like Windows or another OS that doesn't have Guest Additions available in its package repository, you can install them using a CD image:
In VirtualBox, go to the "Devices" menu.
Select "Insert Guest Additions CD image...".
This will mount the Guest Additions as a virtual CD in your VM.
Follow the on-screen instructions to install the software.
One cool feature of Guest Additions is shared folders. This provides a useful interface for sharing files between the host machine and virtual machine.
Click on Settings > Shared Folders
Click on the folder icon on the right
The folder path should point to the location on the host machine
Give the folder a folder name, let's say "SharedFolder"
Now in your virtual machine, run the following command
Now if you drag a folder into your shared folder in your host, you should see it appear in your virtual machine as well!
Sometimes, you might need to run unstable software on your virtual machine, which could cause the VM to hang or become unresponsive. If this happens, you can force a shutdown:
Step 1: Close the VirtualBox window for your VM.
Step 2: When prompted, choose "Power off the machine." This will force the VM to shut down immediately.
Step 3: You can quickly bring the VM back up by starting it again from the main VirtualBox interface.
You don't always need to completely shut down the operating system inside your VM when you're done working:
Step 1: To pause and save the VM's current state, simply close the VM's window.
Step 2: Choose "Save the machine state" from the options. This will pause the VM, allowing you to resume exactly where you left off the next time you start it.
A snapshot is like a time machine for your VM. It captures the state of your virtual machine at a specific point in time. You can return to this state later if needed, making snapshots very useful for experimentation or testing.
To take a snapshot:
Step 1: In the VirtualBox main interface, click on the list icon beside your VM name (e.g., "Ubuntu").
Step 2: Select "Snapshots" to view, take, or manage snapshots.
If you want to restore your VM to a previous snapshot:
Step 1: Ensure your VM is shut down. If necessary, either shut down the OS within the VM or close the VM and select "Power off the machine."
Step 2: In the list of snapshots, select the one you want to restore.
Step 3: Click "Restore" to revert your VM to the selected snapshot.
Running risky programs or commands
SSH with VSCode
Experimenting with low level programs that might break your computer
All these are quite nice if you're working with a bare shell, or an uncustomized shell. However, in this day and age, there are a lot of new commands and tools to enhance your terminal experience.
Before we get started with tools, we need to know how to install them and also how to learn what they do. For commands, we can often pass in flags to tell the program how we want it to run. One of the universal flags is the --help flag.
To install a program, we often use what is known as a package manager. This allows us to search for and install packages without having to google the tool and hunt for the correct downloadable file.
If you're on Linux or WSL, you should have a package manager installed. If you're on Ubuntu/Debian-based distros, this should be apt. If you're on anything else, you should try and figure out what the package manager is based on your distro.
If you're on MacOS, you'll need to install brew:
Now to install a program, you can just do:
You will need to update the package lists before installing a package!
Once installed, do:
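Putting the two steps together, installing and inspecting a tool (fzf here, as introduced next) might look like this; swap apt for your platform's package manager:

```shell
# Ubuntu/Debian (and Ubuntu on WSL): update the package lists, then install
sudo apt update
sudo apt install fzf

# macOS with Homebrew
brew install fzf

# most tools describe themselves via a --help flag
fzf --help
```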
fzf stands for fuzzy finder. It allows you to find anything with a fuzzy search (you can make spelling errors). If you just run fzf, it will do a fuzzy find on your current directory
But we can do so much more than that! To do so, we need to add some keybindings by running this command:
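The exact command depends on your fzf version and how you installed it; these are the two common forms (check fzf's own docs if neither works for you):

```shell
# fzf 0.48 and newer can emit its own setup script:
eval "$(fzf --bash)"

# older packaged versions ship the script as a file instead, e.g. on Ubuntu/Debian:
source /usr/share/doc/fzf/examples/key-bindings.bash
```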
Now try out these new keybindings:
CTRL-T - Paste the selected files and directories onto the command-line
CTRL-R - Paste the selected command from history onto the command-line
ALT-C - cd into the selected directory
Now notice that if you quit the terminal or start a new terminal, these keybindings won't be available. To make the change permanent, we need to save it into a config file. For bash, this config file is at ~/.bashrc
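For example, to persist the fzf keybindings (using the newer-fzf setup line as an assumption):

```shell
# append the setup line to the end of ~/.bashrc
echo 'eval "$(fzf --bash)"' >> ~/.bashrc

# apply it to the current shell without restarting
source ~/.bashrc
```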
So we can find specific files and directories, but what about finding specific contents within a file? ripgrep is a program that aims to solve this.
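Once installed (the package is usually named ripgrep; the command is rg), typical usage looks like:

```shell
# search every file under the current directory for "TODO"
rg TODO

# restrict the search to a directory, or to a file type
rg TODO src/
rg -t py "def main"

# show a couple of lines of context around each match
rg -C 2 TODO
```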
It is quite troublesome to jump around directories, especially if you're copying something, or the path is really, really, really long. A good way around this is to have a program guess which directory you want to jump to based on keywords and how frequently you visit each directory! That is exactly what zoxide does.
You can use z to jump to directories, similar to how you use cd. For example, if you have a directory you frequently go to, like:
You could do something like z funny to jump into it.
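A small sketch of this in practice (it assumes zoxide's shell hook, e.g. eval "$(zoxide init bash)", is already in your ~/.bashrc, and the directory name is made up):

```shell
# visit the directory normally at least once so zoxide learns it
cd ~/Pictures/memes/funny-cat-pictures

# later, from anywhere, jump back with just a keyword
z funny

# zi opens an interactive picker when several directories match
zi cat
```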
According to , CI/CD is defined as:
Continuous integration (CI) refers to the practice of automatically and frequently integrating code changes into a shared source code repository. Continuous delivery and/or deployment (CD) is a 2 part process that refers to the integration, testing, and delivery of code changes. Continuous delivery stops short of automatic production deployment, while continuous deployment automatically releases the updates into the production environment.
Essentially, after writing code, you can think of CI/CD as the pipeline that brings your code from local to production in an (almost) automated fashion.
For this guide, Mozilla Firefox will be our browser of choice because of its powerful, wide-ranging and easy-to-use developer tools. Feel free to use any browser of your choice, keeping in mind that not all features may be available in the same form as with Firefox.
When first learning JavaScript, it is customary to open a blank page on your browser to give your Browser Console a clean slate. To do so, open a new tab in your browser and type the following:
about:blank
and press Enter. You should see a completely blank white (or black, based on your settings) page.
chun@legion:~$ date
Sun Sep 8 08:23:13 PM +08 2024
chun@legion:~$ cal
   September 2024
Su Mo Tu We Th Fr Sa
 1  2  3  4  5  6  7
 8  9 10 11 12 13 14
15 16 17 18 19 20 21
22 23 24 25 26 27 28
29 30
chun@legion:~$ uptime
20:26:04 up 1:01, 2 users, load average: 0.45, 0.62, 0.62
chun@legion:~$ echo hello
hello

C:\Users\user\Documents - For Windows
~/Downloads - For Linux or macOS

chun@legion:~$ history

history | grep "echo"

history | head ## Grabs the first 10 lines of the output
history | tail ## Grabs the last 10 lines of the output
## Opens output in a scrollable format, use your arrow keys to navigate
## and press q to quit
history | less

Use v to select all the commits you want in your interactive rebase
Hit e to start the interactive rebase
While hovering over a commit
p to pick
d to drop
e to edit
s to squash
<C-j> or <C-k> to rearrange your commits
This guide aims to progressively introduce various concepts necessary to start using Git to manage your project.
Before starting the guide, it is highly recommended that you read up on JavaScript and CSS. It will be good to have some React knowledge, though this guide will provide enough guidance even without it.
We cover some of the more advanced concepts of Git and Github for those who are interested (they are not important for most day-to-day applications of Git).
After going through the guided tour from Basics of Github Actions, you should have a general grasp of how Github Actions works and how workflows are constructed.
Now, you might be wondering how Github Actions can be used beyond the basics of building CI/CD pipelines.
In this section, we will focus on discussing several advanced use cases of Github Actions. This is by no means an exhaustive list of what Github Actions can achieve, but it hopes to broaden how you think of Github Actions and use Github Actions.
Rather than having a central example application like Basics of Github Actions, we will be presenting each use case as individual examples and linked to some live repository to better illustrate how it has been used. We trust that you have the understanding and mental models developed to combine these use cases with what we have previously covered to create infinitely many powerful workflows!
We will cover the following use cases:
Creating pollers with Github Actions
Using Github script inside workflows
Executing third-party scripts
Reusing workflows
While the Fork and PR Workflow is the recommended workflow, you could also work collaboratively by creating PRs through the original repository (i.e. no forks involved).
This may be simpler to setup for smaller projects (like Orbital), but we still highly recommend following the Fork and PR Workflow instead.
This workflow is very straightforward:
Clone the original repository
Create a local branch
Make changes to the local branch
Push the local branch to the original repository
Make a PR from the local branch to the main branch
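As git commands, this non-fork flow might look like the following (the URL and branch name are placeholders):

```shell
# clone the original repository directly (no fork)
git clone https://github.com/some-org/project.git
cd project

git switch -c fix/readme-typo        # create a local branch
# ...make your changes...
git add .
git commit -m "Fix typo in README"

# push the branch to the ORIGINAL repository (requires write access!)
git push -u origin fix/readme-typo

# then open a PR from fix/readme-typo into main on GitHub
```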
While it is very simple, it is also very error prone: pushing directly to the original repository may override changes in it if you are not careful. You may not even have the permissions to push directly to the original repository in the first place (this is the case for most open-source projects).
VBoxManage createvm --name "Ubuntu" --ostype Ubuntu24_LTS_64 --register
VBoxManage modifyvm "Ubuntu" --memory 2048 --acpi on --boot1 dvd --nic1 nat
VBoxManage createhd --filename "Ubuntu.vdi" --size 40960
VBoxManage storagectl "Ubuntu" --name "IDE Controller" --add ide --controller PIIX4
VBoxManage storageattach "Ubuntu" --storagectl "IDE Controller" --port 0 --device 0 --type hdd --medium "Ubuntu.vdi"
VBoxManage storageattach "Ubuntu" --storagectl "IDE Controller" --port 0 --device 1 --type dvddrive --medium /full/path/to/iso.iso
VBoxManage modifyvm "Ubuntu" --cpus 4
VBoxManage modifyvm "Ubuntu" --vrde on
VBoxHeadless --startvm "Ubuntu"

sudo -i
apt update
apt install virtualbox-guest-utils virtualbox-guest-x11

mkdir ~/Shared
sudo mount -t vboxsf SharedFolder ~/Shared/ -o uid=1000,gid=1000

Well, the key thing is that Google Docs, as the name suggests, is very much confined to a single document. Well, that can be easily solved, we can just Google Docify our folders right? Here's an example of how the Google Docs method of version control starts to fall apart.
Google Docs takes a snapshot of your document every so often, and tries to "blame" each change on someone (every character/line changed has to be attributed to someone).
Imagine I'm helping modify a cake recipe in Google Docs:
At x point in time, a snapshot is taken. There is a line that says "Add 5g of sugar"
At x + 1 point in time, I decide that's too much sugar, so I change the line to "Add 3g of sugar"
At x + 2 point in time, Person Y accidentally sits on his keyboard while the docs is open and replaces the line with "Add 2348g of sugar"
At x + 3 point in time, a snapshot is taken. Google Docs versioning now shows that Person Y has changed 5g to 2348g of sugar, and my changes are lost to time.
Now imagine this problem on a large codebase of millions of lines, with hundreds of engineers contributing to different parts of this file. How can one prevent something like this? We want each and every change to be well documented, justified, and more importantly, reversible. These are the guarantees that version control systems like Git provide.
A commit is a snapshot of the entire repository at a point in time, plus some metadata. More specifically, it contains:
A hash, or a unique (kinda) identifier for a commit, sort of like your student ID
The author of the commit, the email of the author, and the time of the commit
Each commit also has a parent commit (except the first commit)
We can "chain" commits by following the parent commit till we hit the first commit. If we do this for every commit, we get a directed acyclic graph
Directed: Commits point to their parents
Acyclic: There cannot be commit cycles.
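You can see this parent-chain structure directly with git log; the --graph flag draws the DAG. The hashes and messages below are purely illustrative output from a repository with one merge:

```shell
git log --graph --oneline
# *   4f2a9c1 (HEAD -> main) Merge branch 'feature'
# |\
# | * 9b1d3e7 Add feature
# * | 2c8e5a0 Fix a bug on main
# |/
# * 7d0f4b2 Initial commit
```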
As seen in the diagram above, it is primarily responsible for the following:
Building your project
Running unit and (maybe) integration tests
Deployments to production
A CI/CD pipeline may not include every step. For instance, you might only want the CI/CD pipeline to run unit tests, or perform linting for a pull request. It is not a one-size-fits-all mechanism, but rather a "pick as you go" approach.
CI/CD pipelines are often built as part of the version control systems. This means that when you push your code onto a repository, the CI/CD pipeline will start.
Some common CI/CD software include:
TravisCI
CircleCI
Jenkins
Github Actions
One of the most common CI/CD tools is Github Actions due to Github's pervasiveness in personal, open-source, and commercial software.
Github Actions was first released in 2018, and it aims to be a tightly integrated CI/CD tool that works out-of-the-box with Github repositories.
It is designed to integrate with existing Github flows, reducing the overhead involved in setting up a CI/CD pipeline.
Github Actions also goes beyond simple CI/CD pipelines as it can integrate with other Github events, such as running on a fork, issue created, or release.
The rest of this guide will cover the core syntax and concepts of Github Actions, common workflows you can achieve with Github Actions, and some other slightly unconventional workflows that you can achieve with Github Actions.

To open up the Browser Developer Tools, navigate to a blank page and just use the keyboard shortcut f12 or fn + f12 (based on your keybinds). This should open up the Dev Tools window, which is resizable and can be placed on any side of the screen you want (except the top).
You'll see a few tabs at the top of the window labelled with names like Inspector, Console and Style Editor (depending on the browser). Navigate to the Console tab by clicking on it. You should see a, well, console with your typing cursor targeted on the first line. This is where you will write all your JavaScript code for the first part of this guide.
Next you'll get started with using the console to execute some JavaScript code.

There may be times where you wish to have a workflow run at a fixed duration. For instance, using a workflow to fetch and update a set of data everyday. Github Actions supports such workflows by offering the schedule event type that triggers a workflow.
To declare such a workflow, use the schedule event type along with the cron key, specifying a cron schedule format:
on:
schedule:
# * is a special character in YAML so you have to quote this string
- cron: '30 5,17 * * *'

From the official Github Actions documentation on the schedule event, you would specify the cron schedule and this will cause the workflow to be triggered at the given timing. You could even schedule it multiple times a day or across different times.
Once again, taken from the official Github Actions documentation.
You can use this "poller" pattern in conjunction with some of the next use cases to really power up your workflows. We will discuss them as we go.
There are several restrictions to this event type:
There may be delays to when exactly the workflow runs due to an increase in workload
This only works for workflows located on the default branch (this may change, so it's not always main)
This section of the guide will cover how JavaScript can be integrated with HTML and CSS to add functionality and interactivity in a page. We'll take a look at the DOM (Document Object Model), events, how to query and manipulate the DOM, and end with fetch requests.
DOM stands for Document Object Model and is a way of structuring an HTML document as a tree. This makes it easier for the browser to update the document's styling and content when JS is applied.
Take the following html document:
<!DOCTYPE html>
<html>
<head>
<link rel="stylesheet" href="styles.css">
<title>My web page</title>
</head>
<body>
<h1>Hello world</h1>
<button onclick="alert('Hello!')">Say hello</button>
</body>
</html>

This HTML document is converted into a tree like the one below:
It is not important to know how this is done, but it is important to understand that every html document is converted into this model by the browser to allow for easy querying and updating of the elements.
Events are, quite literally, "things that happen". In this context, they are "things that happen to element(s) on the page". Examples of events are "click", "hover" and "keyup" (when the user presses a key and lets go, letting the key come up).
Event attributes can be assigned to HTML elements to execute some JS code when the event occurs. Take an example from the above HTML:
This sets an onclick attribute for the button; this means when the button is clicked, the JS code is executed. In this case, the code is alert('Hello!'), so when the button is clicked, the user gets a popup with the text "Hello!".
There is a full list of JS events .
Copy the above HTML into a separate HTML file and save it with a name of your choice. Then add some CSS of your choice, or reuse the styles.css file from the previous section. You can even combine the two: define some additional styles in a separate CSS file, then import both files into the HTML file.
For the next few sections we'll look at querying the DOM from the Browser Console.
This guide is a step-by-step introduction to React.js, one of the most powerful libraries for building modern, interactive web applications. With its component-based architecture, virtual DOM, and declarative approach, React has changed how developers build for the web.
Throughout this guide, you’ll learn core concepts through hands-on examples and progressively build your understanding of React’s capabilities. Here’s what we’ll cover:
State and JSX interaction
Component structure and file organization
Props and dynamic rendering
Events, spread operator, and preventing defaults
API calls with fetch and async handling
Side effects with useEffect
Third-party libraries and custom components
Reading docs and using external tools
To get the most out of this guide, you should have a basic understanding of:
and
HTML and CSS
If you’re not comfortable with these yet, we recommend brushing up on the syntax before diving in.
Please refer to this .
Remember how to open the Browser Developer Tools? Good, you'll need it.
Press F12 or Fn + F12 to open up the Dev Tools. On Firefox, this by default opens the Inspector tab, which is where the page's HTML can be seen. Let's use Google's homepage as an example for now.
You'll see 3 windows in the tab (on Firefox at least).
The leftmost window (in the picture above) contains the HTML of the page. It allows you to read the HTML, see elements, look at what tags they use and what attributes they are assigned, and more. It also lets you find hidden elements - elements that aren't visible on the page but are present all the same. You can even edit the HTML by double-clicking a certain part of the code to change its value temporarily and see (in real time) how it affects the contents of the page.
The middle window allows you to see the CSS properties of a selected element on the page. We'll get into CSS in the next section, and there this will be handy since you can inspect the CSS of any element on any webpage, as well as edit styles to see what effect the changes have on the page in real time.
The rightmost window allows you to see the Box model of a selected element, Flexbox of a selected element and Grid layout of the page. When we get into margin and padding later on, the Box model will become a useful visualising tool, but for now we have nothing to do with it.
If you want to select an element on a page, there are 2 ways. The first is to read through the HTML and try to figure out which element is the one you want. This, however, is going to be difficult when pages get long and complicated. Just look at this excerpt from the Google homepage:
The second, and better, way is to use the element picker. You'll notice a cursor icon next to the "Inspector" tab title. This is the element picker.
Click on it, and when you move your cursor over the webpage you'll be able to highlight an element. Click on the element or part of the page you wish to look at in detail, and the element will get selected, showing its style and HTML in the Inspector.
Remember how you can temporarily edit the HTML and CSS of the page? Combine that with the element picker, and you have a powerful set of tools to test out CSS styles and HTML elements in real time for development. Here's an example of me playing around on the Google homepage using these tools:
Once you reload the page, all the temporary changes you make are removed and the page is restored to its original, unaltered state.
We've made a pretty nice webpage so far, but it seems... plain. It needs some style, some color, a different font maybe.
In the next section, we'll learn some CSS to add some styling to our page.
However, sometimes what you want goes beyond accessing the Github API. You might want to run a Python script as part of the workflow, calling other APIs. To do so, you can simply treat them as regular files in a filesystem (think back to how jobs run in virtual machine runners) and call these scripts.
The caveat is that you have to set up the job's virtual machine runner to support the third-party script's language. So, if you're using Python, you need to ensure that Python and all of the script's dependencies are installed. If you're using Javascript, ensure that Node.js and all of the project dependencies are installed.
Since we've covered how to use Node.js in Github Actions in Basics of Github Actions already, we will focus on setting up the job virtual machine runner to work with Python scripts this time.
Essentially, what you need to do is to:
Fetch the current repository
Setup Python using the actions/setup-python@v5 action
Install all of the Python dependencies from a requirements.txt in the current repository (or individual dependencies)
Execute the Python script
It's that simple! Now, the script.py Python script will execute with full filesystem access to the job's virtual machine runner. You can additionally set environment variables for the script to access via env.
If you are attempting to run a third-party script every time a pull_request event occurs and want to read any repository secrets or access the GITHUB_TOKEN token, make sure you use the pull_request_target event instead. The pull_request event is susceptible to having untrusted scripts access this secure information, so for security reasons, Github has restricted its access to these values. pull_request_target does not suffer from such limitations - but note that it runs in the context of the base branch, so be careful not to check out and execute untrusted code from the incoming pull request.
Read more about it here:
You may combine this use case with the previous two to create scheduled scripts that run and interact with the Github API!
Continuous Integration and Continuous Deployment (CI/CD) is the cornerstone of many modern day software projects. Automating the building, testing, and deployment of code tightens the software release cycle, improving software delivery times and increasing the reliability and consistency of software.
In this workshop, we will be covering the following:
What is CI/CD?
What is Github Actions?
Anatomy of Github Actions
Implementing a CI/CD pipeline on an example application
As we discuss the types of workflows you can achieve with Github Actions, we will start unpacking the various concepts in Github Actions.
To start using Github Actions for CI/CD, ensure that the following are properly set up:
Create a Github account. You can refer to .
Download Git to your local machine and set it up. You may refer to our installation guide .
Optionally, download the
This guide will assume that you have some basic understanding of what Github is and some of its core behavior such as repositories, issues, pull requests, etc. We will also be using Javascript as the basic language for all examples for its accessibility. If you have never used Javascript before, you can refer to for a quick refresher. However, this guide does not require in-depth knowledge of Javascript as we will be focusing on writing Github Actions instead.
This section will explore CSS to help us add some style to our pages.
CSS, which stands for Cascading Style Sheets, is a stylesheet language that is used to specify the styling of an HTML document. This is done by adding properties and values to elements by way of selectors. Some CSS would look like this:
As you can see, the element that you want to style is at the top, with the properties enclosed in curly braces ({}). The properties are assigned values in the form property-name: value; and each line ends with a semi-colon.
CSS can be used to assign styles to elements that have the same tag, class, and/or id, and there are ways to give styling for particular events (for example, what a button temporarily looks like when the user hovers over it).
There are 2 ways to add CSS styling to an HTML page. The first is to use an internal stylesheet by way of the <style> tag. In the head of the HTML file, add the opening and closing <style> tags. Then you can place the CSS between the tags.
The second way is to create a new file, add the CSS there, and save it with the .css file extension. Then it can be imported into the html document using the unpaired <link> tag:
There is a third way, which involves using the style attribute to assign styling to a specific element but this is only used in rare cases when just 1 or 2 properties need to be defined for a very specific element, and even then it is recommended to instead use method 2 and give the element an id attribute.
Get the index.html file ready, as we're now going to add some styling to it. Note that we will stick to method 2 of adding CSS (create a new file and import) throughout this guide. First, create a file called styles.css and place it in the same folder as the index.html file. Then add the following line to the head of the index.html document:
Now, you are ready to add some styling. Let's get started with some core concepts.
This guide was created as an effort by NUS Hackers to make knowledge easily available for various technical topics!
This section of the guide will cover JavaScript, from its syntax and constructs to its use in frontend web development. Below are the slides containing an abridged version of the guide (originally intended to accompany a live workshop), along with a link to a repository that holds some sample code as well as sample solutions to exercises posed throughout this guide.
JavaScript is a high-level programming language that is often combined with HTML and CSS to enhance the frontend of a browser-based application. Its main uses are interactive elements, frontend input validation and fetch requests.
JavaScript as a language has influenced web development (and programming as a whole) a lot since its inception, with other languages being developed to add functionality to it (such as TypeScript, which is a typed version of JavaScript).
What makes JavaScript rather unique is that it does not ship with its own standalone compiler or interpreter. Java has the JVM, Python has its installable interpreter and C has compilers like clang. JavaScript, on the other hand, traditionally relies on a web browser for execution (although standalone runtimes like Node.js exist today).
Another feature is non-mandatory semi-colons at the end of each line: you can either omit or leave in a semi-colon at the end of each line in JavaScript, and the browser will understand it either way. However, by convention it is recommended to place semi-colons at the end of every line, and this guide will follow that convention.
Do you have a web browser that is not Internet Explorer? Do you have an IDE (or other text/code editor) installed on your computer? If you answered yes to both questions, then you are ready to start coding! If not, install a browser as well as a text editor or IDE of your choice. Our browser of choice in this guide is Mozilla Firefox, but feel free to use any browser you like.
Next, we will look at how the Browser Developer Tools will aid us throughout the development cycle when using JavaScript, HTML and CSS (to be covered in brief in this guide).
Certain projects may include private files (like secrets) or downloaded content (like dependencies). Such files may contain very sensitive information or very large amounts of data and they should not be included in ANY snapshots of the project.
This is where ignoring files with .gitignore comes into play.
To start, let's create a new file secrets.txt:
If you run git status, you will notice that Git prompts you to stage secrets.txt. But we don't want that to happen. So we can add a file .gitignore and add the path to secrets.txt:
Then, when you run git status again, you will notice that Git no longer prompts you to stage secrets.txt. Wonderful!
You can find a set of predefined .gitignore files here:
We highly recommend going with them for any project so that you are not redefining/missing any common files and folders that should be ignored.
When creating a repository on Github, you can select a predefined .gitignore to be added to the repository, saving you the step of adding the file yourself.
Typically, we ignore files like build artifacts and generated files that are usually derived from the human-authored code in the repository.
Dependency caches like /node_modules
Compiled code like .o, .pyc files
Build output directories like /bin, /out
Let us first understand some of the key terminologies and ideas behind Git to help improve your intuition of Git.
Git is a version control system first created by Linus Torvalds with the aim of managing software changes over time. It is not the first version control system (CVS and SVN existed before it) but it is one of the most commonly used ones.
At a high level, version control systems track the "history of changes" of a piece of software over time. Git, in particular, emphasizes the use of a decentralized collaborative workflow, allowing teams to collaborate and work on a codebase without an active connection to the centralized repository.
To track a codebase, Git relies on a system of commits.
You can think of a commit as a snapshot of the instance of the codebase at a given point in time. For instance, when you are fixing a bug or implementing a new feature, you may want to save the current state of the codebase (take a snapshot). Every time you take a snapshot, it gets added over the previous snapshot as a set of changes that were introduced in the new version of the codebase.
Internally, Git tracks these commits by creating a directed acyclic graph (DAG), with every commit representing a node in the graph and every edge pointing back to the previous commit that occurred.
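You can see this graph for yourself in any repository. Here is a quick sketch that builds a throwaway repository (all paths and messages below are made up for illustration) and prints its commit graph:

```shell
# Create a throwaway repository with two commits
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.name demo
git config user.email demo@example.com
git commit -qm "first" --allow-empty
git commit -qm "second" --allow-empty

# One line per commit; the ASCII edges on the left show
# how branches diverge and merge in the commit graph
git log --oneline --graph --all
```

In a real project with branches and merges, the left-hand edges fan out and join back, mirroring the DAG described above.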
You may notice that each commit node may have more than one incoming edge. This is where the idea of branching stems from.
Suppose that you were working on some changes when a bug report comes in and you have to urgently fix it. You don't know if the bug fix works immediately so you don't want to work on the bug fix in the same location where you're working on your changes. This is when branching comes in handy.
Informally, you can think of a branch as an independent line of work that stems (or branches off) from a point in development. Branches can be seen in the nodes C2 <- C3 <- C5 in the previous diagram. They let you work on features or bug fixes without interfering with the current set of changes.
By default, Git starts out with a main branch.
If you installed Git before 2020, your default branch may be master instead. To change the name of your default branch, you can use the following command:
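The setting in question is init.defaultBranch (available since Git 2.28). A sketch of the commands involved:

```shell
# Make all newly created repositories start on "main" instead of "master"
git config --global init.defaultBranch main

# For an existing repository, you would instead rename the branch in place
# (run inside the repository; shown commented out here since it needs a
# repository with a "master" branch):
# git branch -m master main
```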
More information about branching is covered under .
Recall that in , you had the option to create a private repository. Private repositories allow you to work on projects without having the source code be publicly available, such as when you are building a closed-source project.
However, if you just created a private repository, your friends cannot join the repository and contribute to it. To allow them to view and contribute to the repository, you will need to add them as a collaborator to the project.
To do so, navigate to the "Settings" tab and under "Access" > "Collaborators", you will see this page:
Then, select "Add people" and enter your friend's Github username. They will receive an email notification about the invite which they have to accept. Once done, they will be able to view the private repository.
For this workshop, you'll also need lazygit. Follow the instructions here
In many projects, a commit is more than a snapshot: it should represent a working state of the repository. That means some projects don't really like it if you make a typo and then have to create a separate commit to fix said typo.
So then, how do we rewrite the commit to fix the typo? That's where commit manipulation comes in.
The core of Github Actions are workflow files found in a repository. These workflow files are stored in the folder .github/workflows and are automatically read by Github.
Before we dive into the common workflows in Github Actions, let's first understand the high-level anatomy of Github Actions.
Workflows are configurable automated processes designed to run when an event occurs. These may include things like creating a pull request or opening an issue. A single workflow may be triggered by different events, and it may have certain restrictions placed on it (for example, a workflow triggered by a pull request may be configured to only run when the target branch is the main branch).
An HTML form allows users to submit data to a server for processing. Let's add some functionality to our previous HTML form to allow queries to Google Search.
First, we need to add some attributes to the <form> tag. But what attributes does it accept? Let's look at two of them.
If you're following from the workshop, here are the slides!
The act of providing or serving digital content or an online service typically delivered by a business.
The service or content is generally served locally from your own hardware. Often "self-hosters" run older Enterprise-grade hardware over their home internet connections; however, they may also use another hosting provider's hardware. This is still considered self-hosting.
While people who get into self-hosting often use their own hardware, using hosting providers helps abstract away the hardware difficulties while getting you 90% of the way there.
Selectors are, as you can guess, ways to select elements. These selectors apply to both CSS and JavaScript later on.
The first way to select a group of elements is to select them by name. An example use in CSS would be this:
Here, the styling will be applied to all <p> elements in the html file. This works with any valid html tag, and the general syntax is:
It's convention to write a program to print out "hello world" when first learning a language, so let's do exactly that. In the browser console that you opened up earlier, type in the following:
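The snippet to type is the classic one-liner (the second statement below is just an illustration of the return value mentioned next, not something you need to type):

```javascript
// Prints "hello world" to the console
console.log("hello world");

// console.log returns no value, which is why the browser console
// echoes `undefined` after evaluating the expression
const result = console.log("hello world");
```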
And hit enter. You should see "hello world" printed (without the quotation marks) in the line below, and undefined just below it (on Firefox). Here, undefined is the return value of the expression that you just evaluated. In general, assignment and output expressions return undefined, as do functions that have no return value.
Consider the following:
It looks like a mix of javascript and HTML, and it is called JSX, which you will see a lot in React based applications. It contains an opening tag and a closing tag.
In React Native, the idea behind it is the same, except the syntax might be slightly different:
In this example, we declare a variable name and embed it inside the JSX element with curly brackets:
JSX can contain children:
Oh no! I've committed a secret to my repository, and pushed the changes!
If you already pushed the changes, you might want to revoke the secret.
Let's go back to our button and try to access and change its style. To get its style, we can use the style property:
You should get a CSSProperties object.
To access a particular property, use dot notation again to access the property. Here are some common ones:
In general, element.style.propertyName should return the value for that property.
Github Actions is a Continuous Integration/Continuous Delivery (CI/CD) platform. CI/CD is essentially an automation of the standard software development lifecycle. Rather than having to manually perform tasks like linting, formatting, and deployment, CI/CD platforms automate the process and allow you, as the developer, to focus on developing meaningful products, rather than worrying about fulfilling a checklist of actions.
You can use Github Actions to build a continuous integration and continuous delivery/deployment pipeline.
Every repo can have many workflows in the .github/workflows directory. You need to create this directory at the root of the repository. Each workflow is a .yml file using the YAML syntax. Each workflow can have a set of jobs, each of which has a set of steps. Each step is a single 'command' that is run in order and depends on the step before it.
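As a minimal sketch (the workflow name, job name, and file name here are all placeholders), a workflow file at .github/workflows/hello.yml might look like:

```yaml
name: Hello                 # workflow name shown in the Actions tab
on: [push]                  # event(s) that trigger this workflow
jobs:
  greet:                    # a single job...
    runs-on: ubuntu-latest  # ...on a virtual machine runner
    steps:                  # steps run in order, one after another
      - name: Say hello
        run: echo "Hello from Github Actions!"
```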
Official Github Actions documentation:
cat --help
sudo apt-get update
sudo apt-get install <package-name>
brew install <package-name>
sudo apt-get install fzf
brew install fzf
fzf
eval "$(fzf --bash)"
nano ~/.bashrc
## Inside the editor, add this line
eval "$(fzf --bash)"
sudo apt-get install ripgrep
brew install ripgrep
# Find all python files where I used the requests library
rg -t py 'import requests'
# Find all files (including hidden files) without a shebang line
rg -u --files-without-match "^#\!"
# Find all matches of foo and print the following 5 lines
rg foo -A 5
# Print statistics of matches (# of matched lines and files)
rg --stats PATTERN
curl -sSfL https://raw.githubusercontent.com/ajeetdsouza/zoxide/main/install.sh | sh
brew install zoxide
/home/user/downloads/temp_dir/funny_project
on:
schedule:
- cron: '30 5 * * 1,3'
- cron: '30 5 * * 2,4'
jobs:
test_schedule:
runs-on: ubuntu-latest
steps:
- name: Not on Monday or Wednesday
if: github.event.schedule != '30 5 * * 1,3'
run: echo "This step will be skipped on Monday and Wednesday"
- name: Every time
run: echo "This step will always run"
on: [push]
jobs:
autograding:
permissions: write-all
runs-on: ubuntu-22.04
steps:
- name: Fetch repository
uses: actions/checkout@v4
- name: Setup Python
uses: actions/setup-python@v5
with:
python-version: "3.13"
- name: Installing Python dependencies
run: |
pip install -r requirements.txt
- name: Run Python
run: |
python3 script.py
env:
base_repository: ${{ inputs.repository }}
is_local: ${{ inputs.is_local }}
repository_name: ${{ env.REPO_NAME }}
element {
property1: value;
property2: value;
}
touch secrets.txt
Expo is a set of tools and services built around React Native and, while it has many features, the most relevant feature for us right now is that it can get you writing a React Native app within minutes. You will only need a recent version of Node.js and a phone or emulator.
In other words, Expo abstracts away a lot of the nitty gritty details of dealing with React Native. You can focus on building the app instead of spending time on configurations.
Follow the instructions here.
Follow the instructions here.
Open the project in the IDE of your choice (eg. VSCode)
Open a new terminal window
Run npx expo start
Press i to bring up your iOS emulator (only for MacOS) or press a to bring up your Android emulator
Try editing some text and see the changes in real time!
These workflows are disabled in repositories with no activity in 60 days
Callbacks, destructuring, and immutability
Lifting state up
Persisting data with a backend
Deploying your app
Once the pipeline is set up properly, you can easily deploy your code to production automatically, saving a step in the development process.
Here are a few examples of Github Actions:
A curated list of awesome things related to GitHub Actions: https://github.com/sdras/awesome-actions
Amending a commit folds some new changes into the most recent commit, "destructively" replacing it with a new commit. The changes could be to the files, the author, the commit message, or even a combination of everything.
To do this in lazygit:
Move your head to the commit to amend
Make some changes and stage it
Hit A to amend the current commit with the changes
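Outside lazygit, the plain-git equivalent is git commit --amend. Here is a self-contained sketch (repository, file names, and messages are made up) that fixes a typo without creating a second commit:

```shell
# Throwaway repository with one commit containing a typo
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.name demo
git config user.email demo@example.com
echo "helo" > readme.txt
git add readme.txt && git commit -qm "Add readme"

# Fix the typo, stage it, and fold it into the previous commit
echo "hello" > readme.txt
git add readme.txt
git commit -q --amend --no-edit   # --no-edit keeps the old commit message
```

Note that plain git commit --amend only rewrites the most recent commit (HEAD); amending an older commit, as lazygit lets you do, involves an interactive rebase under the hood.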
That's pretty cool, but what if I screw up and want to look at the commit before I amended it? There are no pointers or labels pointing at my old commit anymore. That's where reflog comes into play.
Reflog is essentially a history of all the commits your HEAD has touched. It also keeps track of the hashes so that you can easily navigate between commits that might have been lost otherwise.
To use reflog:
Click on the reflog tab
Navigate to the commit you want, hit space to checkout
You can also hit C to cherry pick the commit if you want to apply it to your current branch
Do git reflog.
Find the hash of the commit you want to checkout
git checkout <hash>










Runtime-generated files like log files
Personal configuration files e.g. of your IDE
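Putting these categories together, a small .gitignore for a Node.js project (the entries below are illustrative) might look like:

```gitignore
# Dependency caches
node_modules/

# Build output directories
/bin
/out

# Compiled code
*.o
*.pyc

# Runtime-generated files
*.log

# Personal editor configuration
.vscode/
.idea/
```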

Props are how React components communicate with each other. Every parent component can pass information to its child components by giving them props.
We've already seen props (short for properties) before, like how onPress is passed to the Button component, or how styles is passed to the View component. Just like how functions can take in arguments, components can take in properties.
import React from 'react';
import { View, Text, Button } from 'react-native';
// Child component
const Greeting = (props) => {
return (
<View>
<Text>Hello, {props.name}!</Text>
</View>
);
};
// Parent component
const App = () => {
return (
<View>
<Greeting name="Alice" />
<Greeting name="Bob" />
<Greeting name="Charlie" />
</View>
);
};
You can also pass functions as props!
import React from 'react';
import { View, Button } from 'react-native';
// Child component
const MyButton = (props) => {
return (
<Button title={props.title} onPress={props.onPress} />
);
};
// Parent component
const App = () => {
const handlePress = () => {
alert('Button was pressed!');
};
return (
<View>
<MyButton title="Press Me" onPress={handlePress} />
</View>
);
};
In this example, the MyButton component receives a title and an onPress function as props from the parent App component. When the button is pressed, it triggers the handlePress function in the parent component.
Instead of using props, you can destructure the arguments directly in the function signature. This makes it clear which props the component expects and avoids the need to repeatedly write props.
You can also run steps based on certain conditions (using expressions). This is particularly useful when you want to only run a step when certain conditions are met.
The above step only runs when the runner OS is Windows.
Given that virtual machine runners run an OS, you will have access to environment variables from within the job through the env context. To add to the env context, you can use a step:
The above exports a new environment variable START into env. This can then be accessed via ${{ env.START }}.
Suppose that you want to verify that a set of changes are not susceptible to backward compatibility issues in a Node.js environment (version 20), while ensuring that the latest Node.js version is supported as well (version 23).
You can actually use matrix strategies to verify this information by running the same job across different parameters.
So using the above, we are able to then run the same job example_matrix twice with two different node versions: 20 and 23.
The next way is to select elements by class. As mentioned before, elements in the same class would be expected to have similar styling and behaviour, so it makes sense that you would want to select a class of elements instead of by name. An example is as follows:
Here, the styling will be applied to all elements that have the class "long-div". Note that to specify that a class is being selected, you need to prefix the class name with a dot (.)
Lastly, you may want to select an element by its id. This involves using the # symbol as a prefix to the id name:
In this case, the element with the id "useless-button" will be assigned the styling. Generally:
If multiple elements of different classes/tags are being assigned the same styling, it is possible to combine them using commas:
Here, the same styling is being assigned to the <a> tag, the <p> tag and the <div> tag. This reduces the length of the file and makes it easier to control shared styling.
If I wanted to add a few unique rules to the <div> elements while keeping the rest of the styles constant, I could do this:
This is because styles are cascading (hence Cascading Style Sheets). This means that if style rules for an element are defined multiple times, all the properties are combined and applied to the element.
If the same property is redefined multiple times, then the last definition of the property is applied.
When the same property is redefined for a group of elements, the browser decides which definition of the property to use based on this simple algorithm:
Check the selector and choose the most specific selector's property: id selectors are the most specific, and tag name selectors are the least specific
If there are multiple equally specific selectors, then pick the one whose rule appears last in the stylesheet (later definitions override earlier ones of the same specificity)
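For example (the class and id names here are made up for illustration):

```css
p { color: black; }      /* tag selector: least specific */
.note { color: blue; }   /* class selector: more specific */
#intro { color: red; }   /* id selector: most specific */
```

An element like <p class="note" id="intro"> would render red, because the id selector wins regardless of the order in which the rules appear.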
Next we'll look at colors in CSS and what values they can take on.
Components are independent and reusable bits of code.
const element = <div>Hello World</div>;
import { Text } from 'react-native';
const element = <Text>Hello World</Text>;
import { Text } from 'react-native';
const name = "Justin";
const element = <Text>Hello {name}</Text>;
✔️ Add null check to input parser
✔️ Refactor navbar component for better readability
If you find it difficult to summarise your changes, consider splitting them into multiple commits. Try to group related changes together in commits for easier compartmentalisation.
While you may enter anything in the commit message, we strongly recommend sticking to some convention when creating your commit messages.
A common convention is:
First line: 80-character title, phrased imperatively
Then if your change is complex, elaborate on the change in prose.
Another convention is Conventional Commits:
One bonus of this convention is that branches can be named similarly, e.g. (feat/add-button).
You may also refer to CS2103/T (Software Engineering)'s conventions for naming: https://se-education.org/guides/conventions/git.html
feat: add button
fix: prevent text overflow
<button onclick="alert('Hello!')">Say hello</button>
<link rel="stylesheet" href="filepath/nameOfFile.css">
<link rel="stylesheet" href="styles.css">
echo "secrets.txt" >> .gitignore
import React from 'react';
import { View, Button } from 'react-native';
// Child component using destructuring
const MyButton = ({ title, onPress }) => {
return (
<Button title={title} onPress={onPress} />
);
};
// Parent component
const App = () => {
const handlePress = () => {
alert('Button was pressed!');
};
return (
<View>
<MyButton title="Press Me" onPress={handlePress} />
</View>
);
};
- name: Logging
run: |
echo ${{ github.repository }}
echo 'Hello!'
- name: Only run on Windows
if: ${{ runner.os == 'windows' }}
run: |
Write-Output "test output"
- name: Export environment variables
run: |
echo "START=$(date +'%Y-%m-%dT%H:%M:%S')" >> $GITHUB_ENV
jobs:
example_matrix:
strategy:
matrix:
version: [20, 23]
steps:
- uses: actions/setup-node@v4
with:
node-version: ${{ matrix.version }}
p {
/* some styling */
}
tagName {
/* styling goes here */
}
.long-div {
/* some styling */
}
.class-name {
/* styling goes here */
}
#useless-button {
/* some styling */
}
#id-name {
/* styling goes here */
}
a, p, div {
/* some styling */
}
a, p, div {
/* common styles */
}
div {
/* extra styling only for div */
}
import { Text, View } from 'react-native';
const name = "Justin";
const element = (
<View>
<Text>Hello {name}</Text>
<Text>Nice to meet you!</Text>
</View>
);
import { Text, View } from 'react-native';
const TestComponent = () => {
const name = "Justin";
return (
<View>
<Text>Hello {name}</Text>
</View>
);
}
Change greeting from "Hi" to "Hello"
"Hi" is a bit too informal for a greeting. We should change it to "Hello" instead,
so that our users don't feel like we are being too informal. Blah blah blah blah.
Blah blah.
Each workflow is comprised of one or more jobs. Jobs are essentially units of work within the workflow. Jobs run in parallel by default, or in sequential order if one depends on another. Every job runs within its own runner, which is a virtual machine or a container.
You are able to pick between using a virtual machine to execute a job (default) or a Docker container. This guide will not focus on the nuances behind choosing one over the other.
Each job is comprised of several sequential steps that either execute some defined script or an action, which is a reusable extension.
Because steps execute within the same job (and thus the same runner), they can share data between one another through the shared virtual machine/container filesystem. However, because jobs run in different runners, they do not have direct access to the same virtual machine/container filesystem. There are other ways to share data between jobs that we will discuss in one of the common workflows.
The above content is taken from the official Github Actions documentation on the components of Github Actions. https://docs.github.com/en/actions/about-github-actions/understanding-github-actions
To allow you to get a glimpse of what it is like working with Github Actions and setting up various pipelines in Github Actions, we have prepared a simple example React application for you.
The goal of this guide would be for you to add various Github Actions workflows to this example application and extend off of it.
So, throughout a few hands-on activities, you will get the opportunity to build a common CI/CD pipeline to automatically test, lint, and deploy the application.
It is a simple React app created with Vite, built using Typescript, and styled with Tailwind CSS. It is a very simple calculator application that allows you to add/subtract/divide/multiply two numbers — x and y — and display the output:
The calculations are performed using a utility class calculator.ts and there is a unit test suite calculator.test.ts that we have provided as well.
Everything else is not that relevant and you are free to gloss over it if you want.
Fork the repository
Clone the repository
(Optional) Run the repository
You do not need to run the project locally, since we will be focusing on writing Github Actions workflows.
As you go through this section, we will be building on top of this existing project, adding Github Actions and exploring the concepts discussed above.
As you work through this section, you may want to test your Github Action workflows locally before pushing them to Github (to conserve the minutes). You may use the act tool to do so.
The installation instructions for act can be found here: https://nektosact.com/installation/index.html
There are several limitations to using act, such as not having direct access to an actual Github environment, and it does not simulate every event type. So use it just for understanding the basics of Github Actions.
method attribute
This specifies the HTTP request method to use. By default, the method is "GET"; the only other value HTML forms support is "POST". (HTTP itself defines further methods such as "PUT", "DELETE" and "PATCH", but those cannot be used from a plain HTML form.) In our case, Google Search expects a GET request, so the default is fine. This means we do not need to specify a method this time.
action attribute
This attribute specifies the URL to submit the form data to. Let's figure out what the action URL should be for our form.
Go to the Google homepage and enter some search query. Then look at the URL bar and see what address it went to.
As we see above, the URL it went to was https://www.google.com/search. Hence this is the URL we need to submit our form data to. Let's add this attribute to our <form> tag:
name attribute
This attribute is given to an input field, and specifies the name of the input parameter. In this case, what name do we give our input?
Consider the URL bar again. Right after the ?, you should see something along the lines of q=whateveryoutyped.
This is how form data is arranged in a GET request: the name of the parameter, followed by an equals sign =, followed by the value of the parameter itself. Multiple parameters are separated by ampersand symbols &. In general, in a GET request, the URL bar looks like this:
For Google Search, most of the parameters are for Google's internal service: type of browser, operating system, cookies, google account signed in, etc. But the one we want is the first one: q=
Here q is the name of the parameter, and it stands for (presumably) query. Whatever you searched up would be the value. So now we know what name to give our <input> field:
Your form by now should look like this:
Once you reload the page, you won't notice any visible changes. But when you enter a query into the form field, you'll be redirected to a Google Search page for the query as below.
We're pretty much done with HTML for now. The next section of the guide will explain a little about the Browser Inspector in Firefox, a tool that will be very useful when we move on to CSS right after.
Hosting a website (one of the lowest barriers to entry)
Media-streaming services such as Jellyfin
Game-servers such as Minecraft, Rust and Factorio are common
It’s great for learning!
Self-custody of your own data (this is limited with VPS)
Run servers/services that aren’t available as a SaaS
Get fine-grained control over your service (great for debugging personal projects)
You can learn more here:
This is one way to give output. The second way is to use the alert function. Type in the following in the console:
And hit enter again. Now instead of some output in the console, you should see a popup window with the message "hello world". Clicking "ok" will result in the function evaluation completing, which results in the function returning undefined as before.
Now let's try to take some user input. JavaScript provides the prompt function, which takes in a string to prompt the user with, and returns the user's input. It works like alert: a pop-up appears on the page, except this time the user can also type in some text into an input field.
To declare a variable in JavaScript, you can use either the let keyword or the var keyword. The key difference is scoping: a variable declared with let only exists within the code block it was declared in, whereas a variable declared with var is scoped to its enclosing function and can be redeclared, overriding any previous declaration. This is explained in detail here.
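Here is a quick sketch of the scoping difference (safe to paste into the console):

```javascript
// 'var' is function-scoped: the declaration "leaks" out of the if-block
if (true) {
  var a = 1;
}
console.log(a); // 1

// 'let' is block-scoped: 'b' only exists inside the braces
if (true) {
  let b = 2;
}
// console.log(b); // would throw ReferenceError: b is not defined
```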
To get started, let's declare two variables, x and y and assign them values of 5 and 10.
Now we can check what their values are by printing them out as before, or by just entering the name of the variable:
As you can guess, // is used to begin a comment. To write comments spanning multiple lines, start a comment with /* and end it with */.
Next, we'll look at some data types and values in JavaScript.
Say your friend is working on the latest cutting edge project and you want to contribute some changes but you are not entirely sure how the codebase works. You can create a fork of your friend's repository, clone it locally, and play around with the codebase.
All of these changes will not be made available to your friend's repository as you have not made it explicit that they should be reflected on their repository.
✅ Forking allows potential contributors to work on changes without directly affecting the original repository
✅ Forks are stored under your account, not your friend's account
✅ You can only make a single fork of a given repository under your account
Now that we have established what forking is about, let us now see this workflow in action.
First, create a fork of the demo repository: https://github.com/woojiahao/new-folder/tree/main.
Then, clone the fork to your local machine and navigate to the folder:
Then, make a change to the repository like adding a new file, deleting something from hello.txt, etc.
Once done, commit those changes and push them to your fork.
Then, navigate back to the original repository and create a pull request. This time, you should notice that the compare (source branch) dropdown includes your fork's main branch as well.
Everything else is the same as described in .
Voila! You have now completed the fork and PR workflow. However, there are some caveats!
In such scenarios, it is necessary to add a new remote (see Integrating Remote Repositories for more information about this) pointing to the upstream repository.
To do so, you can run the following line (change the repository link where applicable):
Note that the URL used is the one under "HTTPS" since you do not have SSH access to the demo repository.
Once this upstream remote has been created, we can fetch any changes made to the original repository using the git pull command as such:
This will automatically fetch and merge any changes from the original repository to your local repository. However, if you want to have some more control over the merging process, you can use the following:
The first command retrieves the latest changes (but does not apply them locally). The second command performs a standard merge action, merging the changes from the upstream repository with your current local branch.
You can also create a remote branch on the fork and make a PR from that branch as the source branch instead. Just create a new branch locally for the fork and push it to your fork:
The useState hook enables state management in functional components. It accepts an initial value and returns an array containing:
The current state value
A function to update that value
Note: useState(initialVal) returns an array, ["val", "func"] destructures the array to access the elements and assign it to a specific name.
When a component's state or props change, React performs a rerender by:
Destroying the current component instance (including all variables and functions)
Recreating it with the updated state values
During this process, React maintains state consistency by providing the latest values to the recreated component. The initial value is only used on the first render.
To add state:
Import useState:
Replace regular variables with state:
Update state using a setter function:
Once you've installed git-filter-repo, run:
git-filter-repo --force --invert-paths --path <path-to-secret>
If you actually run the above code to get a property of the element (say, the background-color), you instead get an empty string "" rather than the value you specified in the CSS file. This is because JS, when searching for an element's style, looks at the element's style attribute. This attribute allows you to define inline CSS for an element, but it is not good practice to use it since it mixes HTML and CSS.
However, this doesn't stop us from changing the style as we see fit. Select the button from the console as before, and run the following:
You'll see the button color changes to red.
If you navigate to the Inspector and select the button, you'll also see that it now has a new style attribute that wasn't there before:
You can do the same with any style properties by getting/setting their values to any (valid) values as you like.
To reset the value back to normal, simply assign the empty string:
This removes the inline styling for that property of the button which sets it back to its original style.
Here's how this looks on the page:
Next, we'll look at adding JS to our html pages to make it more interactive.
let button = document.getElementById("hello");
button.style;




Before we start installing and running our services, it is good to keep our system up to date. To update our system, do:
nginx is a great web server for these things, but you can pick whatever you want, or are more familiar with
On Ubuntu, this is pretty simple: apt install nginx
Enable it and start it: sudo systemctl enable nginx, sudo systemctl start nginx
Open up your browser and head to — you should see a page there
Edit files in /var/www/html/..., etc: try replacing index.html with "hello world"
check again
Here's a pretty cool, simple website. Let's try and deploy it:
One of the best things about a server is that it's meant to run 24/7, something that our laptops or desktops aren't great at doing. This means it's really good at running things constantly at a scheduled interval. To do such a thing, we'll use something known as cron jobs.
Cron is a scheduling daemon that executes tasks at specified intervals.
These tasks are called cron jobs and are mostly used to automate system maintenance or administration.
The crontab command allows you to install, view, or open a crontab file for editing:
crontab -e - Edit crontab file, or create one if it doesn’t already exist.
crontab -l - Display crontab file contents.
crontab -r - Remove your current crontab file.
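A crontab entry has five schedule fields (minute, hour, day of month, month, day of week) followed by the command to run. For example, a hypothetical entry to run a script every morning at 7am might look like this (the script path is our own example):

```
# m  h  dom mon dow  command
  0  7  *   *   *    /home/you/check-weather.sh
```

The * wildcards mean "every day of the month, every month, every day of the week".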
We have an API that gives us the 24 hour weather forecast in Singapore. We want to:
Check this forecast every morning
If it's about to rain, send us an alert. For simplicity, let's send a Telegram message!
Here's a bash script that sends a telegram message if it's going to rain that day:
To get a Telegram bot token, go to @BotFather, create a new bot, and paste the token you receive into the script
To get your chat id, go to @getmyid_bot and copy your chat id there
You'll need to start a chat with your bot first: open your bot and send '/start' before you try running the script
Here are some more local APIs you can try to automate/script!
These should give you a very simple idea of hosting some services on our own servers. There's so much more we can't cover in a 2 hour workshop but here are some resources if you're fast/want to learn more!
Syncthing is an open-source file synchronization tool that allows you to effortlessly synchronize files across multiple devices. It runs on various platforms, providing a seamless experience for syncing files securely and privately over local networks or the internet.
There are plenty of reasons why you should run your own VPN. We can set one up to ensure that we can browse the internet securely or hide our IP addresses by routing our network traffic through our servers instead.
HTML, which stands for HyperText Markup Language, is a language used to construct web pages by defining their structure and content. The structure is defined by tags, which are keywords that define elements on a page, and are surrounded by angle brackets (<>).
Tags, as mentioned before, define the structure of a page by creating elements. There are two types of tags: paired and unpaired.
Paired tags have an opening tag, which is the name of the element surrounded by angle brackets, and a closing tag, which is the name of the element prefixed with a / and surrounded by angle brackets (<element>...</element>). The content goes in between the opening and closing tags.
Example of paired tags include the <html> tag, the <head> tag and the <body> tag.
Unpaired tags only have a single tag for the element (<element>).
Examples of unpaired tags are the <br> tag and the <hr> tag.
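For instance, here is a paired <p> tag wrapping its content, followed by the unpaired <br> and <hr> tags:

```html
<p>This paragraph is wrapped in a paired tag.</p>
<br>
<hr>
```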
Open up a text/code editor or IDE of your choice, and type in the following code:
Then save the file as index.html. Congratulations, that is your first HTML file ready. To view it in the browser, navigate to the file in your file explorer and open it with a browser of your choice (see below).
In your browser, you should see something like this:
Now let's see what these tags do.
<!DOCTYPE html> declares to the browser that the document is an HTML file. Without it, browsers fall back to "quirks mode", so you should always include it.
The <html> tag contains the entire contents of the page between its opening and closing tags.
The <head> tag contains meta information about the page, as well as imported scripts and stylesheets. Here it also contains the <title> tag:
Let's take a look at some more tags:
The <div> tag is a paired tag that is often used to create a generic element that may or may not contain text.
The <a> tag is a paired tag that is used for hyperlinks (a stands for anchor).
The <button> tag is a paired tag that is used to create a clickable button.
We'll end up using all of these tags throughout the guide, so make sure to keep them in mind. A full list of HTML tags can be found .
Next, we'll take a look at tag attributes and how they can be used to give properties to elements.
One of the great features of Zig is that it is fully compatible with C. Let's return to the Hello, world! example, this time using C to print instead of Zig.
const c = @cImport({
@cInclude("stdio.h");
});
pub fn main() !void {
_ = c.printf("Hello, world!\n");
}
That's it! Using the @cImport directive, we can directly use C code inside of Zig. Of course, please compile this to verify that it works as expected.
Let's write some of our own code instead of using the C standard library. Our custom C program will have just a single greet function, that says hello to the name passed as a parameter.
Let's verify that this works as a C program first. We can create a temporary main file for the C program.
And then compile it using gcc as with any other C program.
It works! So how do we get gcc to work with Zig? Ah! We unveil the secret behind Zig's ability to handle C code so well. It comes with a C compiler :O
Insane! Let's continue by trying to use this C code in Zig instead of just C. We can start by modifying our build file to look for greeter.h in the correct place.
Now, we can configure our main.zig file to import our custom greeter.c.
We use the zig build run command instead of zig run, and we can see that everything works well!
On top of commits, we like to label different "lines of work" as branches. By default, your branch will be something like "master" or "main". We can create new branches to group together a bunch of changes and commits.
Notice that all commits come with a unique ID. This can make it difficult to reference the exact commit you are on. This is where the HEAD label comes in.
HEAD is a special label given to the current commit you are looking at.
To move your HEAD around in lazygit, go to the Commits submenu (hit 4), then navigate with the arrow keys/mouse to the commit you want and hit spacebar to switch to that commit.
Without lazygit, you would do something like:
By default, a new branch is always created from the point of HEAD of your current branch (usually main) onwards. This means that the branch will have all the snapshots that precede (and include) HEAD but any new snapshots made on the branch are not reflected (yet) on main.
Hit 3 to go to the branches submenu
Hit n to create a new branch
Hit space to switch branches
Hit d to delete a branch
Recall that we mentioned that the changes of a branch are not reflected across any other branch UNTIL otherwise specified? How exactly do we specify this?
An easy way to do so is by merging the branches into one another.
Suppose we have two branches: main and feature-A and we want all the changes from feature-A to be present in main so that we can demo it to the executives. We first need to clearly denote which is the source branch (where the changes exist) and the target branch (where we want the changes to appear in). In this scenario, feature-A is the source branch and main is the target branch.
Then, a simple procedure to perform the merge would be:
Switch to the target branch
Merge source branch into target branch
This can be done via:
Hit M on a branch to merge it into your current branch
However, this process is not always so straightforward. As you will see in the coming chapter, merging has its own set of "problems" that may arise.
Did you see the !void return type of the main function from earlier? That is Zig's way of indicating that the main function could possibly return an error. Error handling is a feature that was designed carefully in Zig, and is worth exploring before we go further.
An error is defined in a similar way to an enum. You can also use || to combine different error types together. Here are some errors from the standard library.
After defining your error, you have to indicate that a function you've written can possibly return an error. This is done by placing the error name, then a !, then the actual return type.
In the following example, the function myFunction can only return MyError. There is no other possible error type. On the other hand, myOtherFunction could possibly return other error types — its error type is inferred from the function body.
You can either use try to propagate errors from functions that you call, or catch to handle the errors at the call site. In Zig, errors must be handled (eventually), otherwise you'll get a compiler error.
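As a small sketch (our own example, not from the workshop code), here is how try and catch look in practice:

```zig
const std = @import("std");

const MyError = error{Oops};

fn mightFail(fail: bool) MyError!u32 {
    if (fail) return MyError.Oops;
    return 42;
}

pub fn main() !void {
    // `try` propagates any error up to main (whose !void return type allows it)
    const a = try mightFail(false);
    // `catch` handles the error right here, substituting a fallback value
    const b = mightFail(true) catch 0;
    std.debug.print("{} {}\n", .{ a, b });
}
```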
Think of it as a tool that allows you to create stuff with logic and UI. Since CS1101S (or any of the CS1010 variants) focuses mainly on logic, this might be the first time you are dealing with UI.
UI is generally handled with the JSX syntax covered in the previous section. For React Native, we have some basic out-of-the-box components that we can use.
These components are enough to solve 90% of your needs.
View: A container like div
ScrollView: Like View but Scrollable
Text: Displays texts
Button: Supports touches
TouchableOpacity: Like Button but can encapsulate Button
TextInput: Supports inputting texts
Image: Display images
You can read up on other components .
Components can be organised using . Understanding the 4 concepts below can meet 90% of your needs. Later on we will try to create the mockup for NUS NextBUS using these concepts alone.
row: organise components horizontally
column: organise components vertically
space-between: Spread out components evenly. First component at the start, last component at the end.
center: Components are centered.
You can style components using the StyleSheet:
Alternatively, you can style components using CSS libraries such as . There are different paradigms to approach styling components but the general idea is the same - they all use concepts from CSS. You can visit the section in the wiki for more info!
Note: We will use the same HTML page we have been using. I'll refer to it as clicker.html.
In this section, we'll make a simple page that has a button and a counter. Each time the button is clicked, the value of the counter is incremented. We'll use the same page we have been using before.
Set up the html file such that its body contains at least the following two items:
a button with an id
a text element of your choice also with an id
An example is below:
You can have other elements and other styling to make it nicer, but that is up to you.
Add an event listener to the script.js file that listens for the DOM content to be loaded, upon which it adds an event listener to the button. Here's how that looks:
updateCounter is the function that we will use to update the counter on the page.
updateCounter function
Inside this function we need to do the following:
Get the counter element from the page
Get its text value
Increment it by 1
Set the counter element's inner value to the new value
Try doing this yourself before looking at the code below.
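If you get stuck, here is one possible implementation. Note that the element id "counter" is our assumption; use whatever id you gave your text element:

```javascript
function updateCounter() {
  // 1. get the counter element from the page (id "counter" is an assumption)
  const counterEl = document.getElementById("counter");
  // 2. read its text value and 3. increment it by 1
  const next = parseInt(counterEl.innerText, 10) + 1;
  // 4. set the counter element's inner value to the new value
  counterEl.innerText = String(next);
}
```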
That's all we had to do. Now, test out the code by saving all files, reloading the page, and then clicking the button. The counter value should get incremented:
Try adding the following features:
When the counter value reaches a non-zero multiple of 10, its font color should change to gold, and set back to original color otherwise
Add a second button that resets the counter variable to 0
Add a third button that decrements the counter, but stops decreasing once it reaches 0.
A sample solution can be found .
Next we'll look at how elements can be created or removed in a page.
Download the latest LTS version of Node.js from the official website.
Once installed, open your terminal (macOS/Linux) or Command Prompt (Windows), and verify the installation with the following commands:
$ node --version
v22.14.0
$ npm --version
v11.3.0
Your versions might differ slightly — that’s perfectly fine!
Navigate to the folder where you’d like to create your project:
Run the following command to create a new React project with TypeScript:
This will take a few minutes to install dependencies. Once complete, navigate into your project folder:
To start the development server, run:
Your React app should now be running on .
This guide aims to progressively introduce various concepts necessary to start using Git to manage your project.
Before starting the guide, it is highly recommended that you give Setup a read to ensure that everything you need for this guide is correctly setup.
We also highly recommend that you read Fundamental Concepts to understand the terminologies that this guide uses.
The guide is broken up into three parts:
Fundamental concepts
Collaborative workflows
Advanced concepts
The first two parts are the most important and we highly recommend that you, at the very least, finish them. The last part is good to know but not crucial if you are in a rush.
This guide was created as an effort by to make knowledge easily available for various technical topics!
The slides are published here:
11 May 2024: First draft for first 2 sections
18 May 2024: Conducting the workshop for Orbital 2024
17 May 2025: Conducting the workshop for Orbital 2025
These are issues that often relate to your SSH setup.
Please follow the Setup guide correctly.
If you are on Windows, please use Git Bash over Command Prompt or PowerShell, as it supports Bash, which is what is used across this guide.
If you are on MacOS or Linux, feel free to use your default terminal.
Yes. It is how you reference the branch and it is easiest if your remote and local repositories share the same branch name to avoid confusion.
No. All your local repository needs to know about the remote repository (see ) is the remote repository URL. Therefore, the names of both local and remote repositories can be different.
This involves setting up password-less authentication. You can refer to this guide:
You can modify the URL for origin via: git remote set-url origin <new url>
You can rename the remote via: git remote rename <old name> <new name>
There is no fixed convention. However, traditionally, main usually represents the production ready state of the repository and is managed through CI/CD. This means that if the feature pushed straight to main has bugs, the production state will also have bugs. This is the biggest reason for using a separate feature branch before merging into main.
However, if the feature you are working on is small enough, then it is possible to merge directly into main.
This involves various steps depending on the state that the repository is in. Please refer to this guide:
As mentioned in Branching, branches can be thought of as independent lines of work that allow you to work on features/bug fixes independently from the main branch. This means that any changes made on a branch are not reflected on the main branch UNTIL otherwise specified.
This is very powerful in allowing you to separate your work from the primary branch for development or just from other people's work.
For simplicity in this section, we will focus on the bare basics of branching in Git.
Notice that all commits come with a unique ID. This can make it difficult to reference the exact commit you are on. This is where HEAD comes in.
HEAD is a special name given to the latest commit of your current branch. It is a nice syntactic sugar to allow you to easily reference the latest commit without knowing its exact commit ID.
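You can see HEAD in action in a throwaway repository (the folder and commit names here are just for illustration):

```shell
mkdir head-demo && cd head-demo
git init -q
# identity flags are passed inline so this works even without global git config
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "first"
git rev-parse HEAD     # prints the full commit ID that HEAD points at
git log --oneline -1   # the same commit, abbreviated
```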
By default, a new branch is always created from the point of HEAD of your current branch (usually main) onwards. This means that the branch will have all the snapshots that precede (and include) HEAD but any new snapshots made on the branch are not reflected (yet) on main.
There are two ways to create a new branch:
Then, you can switch to the branch by using:
Alternatively, you can use the git checkout command for both:
As mentioned earlier, you can switch to a branch via:
To view all branches, you can use the following:
To delete a branch, you add the -d flag:
You may have misspelled the branch name or parts of it. You can rectify it using the -m flag:
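Putting the branch commands above together in a throwaway repository (branch and folder names are just for illustration):

```shell
mkdir branch-demo && cd branch-demo
git init -q -b main      # -b main names the default branch explicitly
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "first"

git branch feature-A             # create a new branch at HEAD
git switch feature-A             # switch to it
git branch                       # list all branches
git branch -m feature-A feat-a   # rename (fix a misspelled name)
git switch main
git branch -d feat-a             # delete; -d works because the branch is fully merged
```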
Recall that we mentioned that the changes of a branch are not reflected across any other branch UNTIL otherwise specified? How exactly do we specify this?
An easy way to do so is by merging the branches into one another.
Suppose we have two branches: main and feature-A and we want all the changes from feature-A to be present in main so that we can demo it to the executives. We first need to clearly denote which is the source branch (where the changes exist) and the target branch (where we want the changes to appear in). In this scenario, feature-A is the source branch and main is the target branch.
Then, a simple procedure to perform the merge would be:
Switch to the target branch
Merge source branch into target branch
This can be done via:
However, this process is not always so straightforward. As you will see in the coming chapter, merging has its own set of "problems" that may arise.
So that was a lot to learn! Now, what can we do with it? We haven't really gone through the useful stuff we can do with scripting, so this section will focus more on some cool things we can do with scripting.
To do this, we'll need to learn commands
curl and wget are two commands that do basically the same thing, with slight variations: they both allow us to send and receive data through common networking protocols. For our use case, the most important thing is that they allow us to talk to an Application Programming Interface (API). Without going into too much detail, APIs are just a magical gateway for us to talk to programs other people have made.
Our final advanced use case we would like to cover is the use of reusable workflows.
Suppose you're in a monorepo and have the following sub-projects:
web
admin
Great, now we have Git set up! You might have some questions about the Git Setup process:
So now we want to be able to navigate, operate and monitor our system. To do this, most servers have a relatively homogenous system: systemd!
systemd is a software suite that provides an array of system components for Linux operating systems. The main aim is to unify service configuration and behavior across Linux distributions.
systemctl is the command-line tool that manages the systemd system and service manager in Linux.
Let's create a sum function that takes in a slice of numbers (any type of number!!) and returns the sum of all the numbers! How can we do that?
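One way to do it (a sketch of the idea, with names of our own choosing) uses a comptime type parameter:

```zig
fn sum(comptime T: type, numbers: []const T) T {
    var total: T = 0;
    for (numbers) |n| {
        total += n;
    }
    return total;
}
```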
Woah, what is comptime? And why is the type of T like type itself?
Open up the html page you made . Open it in the browser, and open the browser console. We're going to query the document for elements and change their attributes, then see the changes in real-time.
Our page currently only has two things: a heading and a button. So lets pick one of them. Run the following in the console:
Now, button has the button element assigned to it. Test it by typing button and hitting enter in the console; you should see something like below:
Now that we have started working on our local repository in , we may want others to view the contents of our repositories. We may also want a layer of security against local machine crashes by storing our repositories somewhere else.
This is where remote repositories and Github come into play.
The box model is a way to visualize the space taken up by an element on a page. It involves 4 main properties:
The content: This is content of the element (text, other elements, images, etc.)
The padding: This is the space between the content and each border
The border: This is the, uh, border around the element
To get a good grasp on Git, we will first focus our attentions on working with Git on local repositories.
You can create a local repository from an existing project folder or from an empty folder. For the sake of simplicity, we opted to demonstrate the steps to create a local repository from scratch.
First, create the folder:
Then, navigate to the folder:
Finally, run the following command:
This tells Git that you want this folder to be monitored by Git.
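The three steps above, as shell commands (the folder name new-folder matches the demo repository used later in this guide):

```shell
mkdir new-folder   # 1. create the folder
cd new-folder      # 2. navigate into it
git init           # 3. tell Git to start monitoring this folder
```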
git clone https://github.com/<your Github username>/cicd-calculator
yarn
yarn dev
<form action="https://www.google.com/search">...</form>
https://www.domain.com/action-url?name1=value1&name2=value2&name3=value3
<input name="q" placeholder="query" required>
<form action="https://www.google.com/search">
<input name="q" placeholder="query" required>
<button type="submit">Submit</button>
</form>
console.log("hello world");
alert("hello world");
let x = prompt("Enter a number:"); // 'let' declares a variable, see below
console.log(x); // prints out the number the user typed in
let x = 5;
let y = 10;
x; // will return 5
y; // will return 10
git clone git@github.com:<your Github username>/new-folder.git fork-folder/
cd fork-folder/
git add .
git commit -m "fork changes"
git push origin main
git remote add upstream https://github.com/woojiahao/new-folder.git
git pull upstream main
git fetch upstream main
git merge upstream/main
git checkout -b new-fork-branch
git push origin new-fork-branch
const [currentValue, setValue] = useState(initialVal);
// example:
const [color, setColor] = useState(defaultColor);
import { useState } from "react";
// Before
let counter = 0;
// After
const [counter, setCounter] = useState(0);
function increment() {
setCounter(counter + 1);
}
// or you can declare an arrow/anonymous function, which is more modern
const increment = () => setCounter(counter + 1);
let element = document.querySelector(query);
element.style.color; // returns the color of the element
element.style.border; // returns the border specification of the element
element.style.backgroundColor; // returns the background color
button.style.backgroundColor = "red"; // or any other color value you like
<button id="hello" onclick="alert('Hello!')" style="background-color: red;">...</button>
button.style.backgroundColor = "";
git config --global init.defaultBranch main
sudo apt-get update && sudo apt-get upgrade
// in std/mem/Allocator.zig
pub const Error = error{
OutOfMemory,
};
// in std/io.zig
pub const NoEofError = ReadError || error{
EndOfStream,
};
// in std/dynamic_library.zig
const ElfDynLibError = error{
FileTooBig,
NotElfFile,
NotDynamicLibrary,
MissingDynamicLinkingInformation,
ElfStringSectionNotFound,
ElfSymSectionNotFound,
ElfHashTableNotFound,
} || posix.OpenError || posix.MMapError;







Git relies on the core concept of a repository, which is essentially a parent folder that Git is added to in order to monitor changes to the folder and its contents (including sub-folders).
These repositories can exist either locally on your machine or remotely on an external server (or self-hosted). This guide will look at both instances.
Github is an example of a hosted remote Git server where you can create remote repositories and work on them locally (while pushing changes remotely, hence the "decentralized" nature of Git).
Think of it like having two versions of a Google Doc. When you are editing your document on a train, for example, you might lose connectivity; you'll then have an offline copy that differs from the online copy (the source of truth).

The <title> tag defines the title of the page, which is the text that appears on the tab in the browser.
The <body> tag contains the body of the page, including text, forms, images and more.
The <h1> tag defines a heading. Headings range from 1 to 6, with 1 being the largest and 6 being the smallest.
The <p> tag defines a paragraph of normal-sized text. Note that it is possible to place the text on its own without a <p> tag and it will still render as a paragraph.
The <form> tag is a paired tag that is used to create an HTML form. We'll see more of these later.
The <hr> tag is an unpaired tag that is used to create a horizontal line on the page.
The <br> tag is an unpaired tag used to denote a line break.
The <img> tag is an unpaired tag used to embed images on a page.
The <input> tag is an unpaired tag used to create an input field inside the form.
The <link> tag is an unpaired tag used to import files (usually stylesheets).
The <script> tag is a paired tag that can be used to embed/import JavaScript code into the page.


$ gcc -o greet c-src/greeter.c c-src/main.c
$ ./greet
Hello, Alice!
Hello, Bob!
Hello, world!



$ cd path/to/your/folder

$ npx create-react-app <your-project-name> --template typescript
commit hash is f5458f36c67c927fcd41e65e29eb5b8a1491d7a1
author is francisyzy <[email protected]> which was set by the author's git config
date of the commit is Fri May 17 17:14:09 2024 +0800
message of the commit is Update links
The git commit history is also viewable on the Github repository itself.
git revert: Revert a commit. Lets you 'undo' a commit: it will undo the changes made by that commit and make a new commit to do so.
Do note that this is not a way of removing sensitive information, as the information can still be retrieved from the commit history.
There is a Stack Overflow discussion on how to remove such information and rewrite history. Do note that rewriting history will break other people's copies of the repo.
git reset: Reset HEAD to a commit. This will reset the current HEAD to the given commit hash.
A useful reset command for editing a commit message before it is pushed to the remote is git reset --soft HEAD~. It resets to the commit right before the current one while leaving the files from the latest commit staged, allowing you to run git commit -m "Updated message" to change the commit message.
git checkout: Check out files (and branches) at commits. You can use git checkout <commit hash> to view the files as they were at a specific commit.
This is useful when you want to inspect the files at a commit before resetting to it.
All of them require the exact same CI pipeline of running unit tests and linting that we introduced in Basics of Github Actions. Copy-pasting the same workflow file three times might work, but it means that when one changes, all of them need to change. Some might argue that, as the application expands, this flexibility is required to avoid tight coupling to one type of workflow; for the sake of simplicity, however, let's suppose that this duplication is fundamentally bad for this use case. How do we go about reconciling this?
Well, this is where reusable workflows come in. They allow you to define a "common workflow" that can be shared and reused by other workflows as steps. Essentially, what you're creating are custom actions that simply haven't been published.
The official documentation goes into the nitty gritty of the limitations and access of reusable workflows, so we will not cover it in this section. Instead, we will focus on setting up a very rudimentary reusable workflow for the above scenario.
So let's suppose that the original workflow looks like this:
You realize that the job steps are exactly identical, apart from the folder that these commands are being run in. We can generalize these as inputs to the reusable workflow!
Essentially, the key changes were:
Changing the trigger event type to workflow_call, indicating it's a reusable workflow
Specifying the inputs that the reusable workflow requires, such as the workdir since that is the only thing that changes across variations of this CI workflow
Specifying the cache-dependency-path in the actions/setup-node@v4 action as we need to use the yarn.lock files specific to each sub-project
Specifying the working-directory of each step to point to the given sub-project directory
This is all we really need to create the reusable workflow. Then, we can update our original ci.yml with the following:
In fact, we can even inline every sub-project's CI into the same workflow:
Incredible, we've managed to greatly simplify our CI workflow by using reusable workflows!
I'm sure many of us have heard of macros. They exist in languages like C, Rust or even Lisp (in quite a different form), and they serve as a way of executing some code at compile-time instead of runtime.
We might have also heard of generics in Java and Rust, or even templates in C++ (not sure if I'm committing a sin to lump these together), in order to write code that works across all types with certain constraints.
If you're a user of Go, you might've also used go generate to write repeated code for you.
Well, Zig has a solution that encompasses all three use cases mentioned above, and that is comptime! The comptime feature in Zig lets you write plain Zig code (not some special language à la C++ template metaprogramming) that is executed at compile time instead of runtime!
Looking at our earlier example, our sum function is pure, and can be pre-computed. Let's try to get Zig to precompute the results instead of computing the results at runtime.
Wow! Do you notice what changed? By simply adding the comptime keyword in front of our call to sum, the function was called at compile-time instead of runtime.
Our sum function is already a pretty good example of using comptime to create the effect of generics. Let's go one step further and create a data structure that supports generics.
Notice how we can return structs from functions in Zig! This definitely wouldn't work at runtime, since types don't have a representation at runtime (unless we use runtime reflection). So Zig is actually running the Vector2D function at comptime here, and treating the resulting structs as types to be constructed.
Remember that this is all Zig code, which means we can go one step further and make the length generic as well!
Here, both parameters to Vector play a part in defining what kind of struct the resulting type will be!
Create a new file in the folder and add some text to it.
If you don't want to use bash commands, you can just create the file using your preferred method as well.
Now, run the following command to view the status of your repository:
You should see the following:
Recall that in Fundamental Concepts, Git does not automatically add files to a snapshot as it does not know exactly what you want. So we want to tell Git that we want hello.txt in the snapshot.
You may notice that the git status message states that hello.txt is untracked. Untracked files are those that have never been registered with Git before. They are often new files that have been added to the repository and have not existed in any snapshots.
Files that have been added to a snapshot before are considered "tracked" and Git knows to look out for changes between snapshots.
As discussed in Introducing the commit, a file from the working directory needs to be explicitly added to the staging area for a snapshot to include it. By default, an untracked file that is added to a snapshot becomes tracked for future snapshots.
To add hello.txt to the staging area, use the following command:
Then, use git status to view the status of your repository again:
Notice that now, instead of stating that your file is untracked, Git is indicating that the changes have not been committed. This is a sign that the file(s) are tracked and staged for the snapshot.
Now, to take the snapshot (make the commit), you can use the following:
The -m flag is used to specify the commit message. Every commit has an accompanying message that you can use to indicate what the commit contains/entails.
There you have it! You have made a local repository and created a snapshot! We will now look at how we can integrate Github with your local repository!
Using const syntax for JSX
Using function syntax for JSX
Closing tag for components that do not encapsulate anything
Self-closing tag for components that do not encapsulate anything
Nesting components together
Extracting the components into their own components within the same page
Extracting the components into their own components into other files, export them and importing them for use
const logHelloWorld = () => {
console.log("Hello World");
}

function logHelloWorld() {
console.log("Hello World");
}

document.querySelector
Usage: document.querySelector(query);
This function allows us to select elements based on a selector:
As before, selectors can search for tag names, classes or ids. It is important to note that querySelector returns the first match it finds.
The function takes in a string with the selector as an argument, and returns the element as a Node object. If there is no element that matches the query, it returns null.
For the html page we've opened up, navigate to the Browser console and lets run some queries.
This functions similarly to document.querySelector but this time returns an array of all the elements that match the query. If there are no matches, it returns an empty array.
Usage: document.querySelectorAll(query);
This is a specific way to query the DOM for an element with a given id. It takes in a string with the expected id (omit the # prefix) and returns the element if it exists (otherwise it returns null).
Usage: document.getElementById(id);
This returns an array of all the elements that belong to a given class.
Usage: document.getElementsByClassName(className);
This returns an array of all the elements in the document that have a given tag.
Usage: document.getElementsByTagName(tagName);
We'll look at how to access the attributes and other properties of elements in the next section, as well as how we can change them using JS.
<!DOCTYPE html>
<html>
<head>
<title>My web page</title>
</head>
<body>
<h1>Welcome</h1>
<p>This is my first page</p>
</body>
</html>

// In file /c-src/greeter.h
void greet(const char *name);
// In file /c-src/greeter.c
#include "greeter.h"
#include <stdio.h>
void greet(const char *name) {
if (name == NULL) {
printf("Hello, world!\n");
} else {
printf("Hello, %s!\n", name);
}
}

// In file /c-src/main.c
#include "greeter.h"
#include <stdio.h>
int main() {
greet("Alice");
greet("Bob");
greet(NULL);
return 0;
}

$ zig cc -o greet c-src/greeter.c c-src/main.c
$ ./greet
Hello, Alice!
Hello, Bob!
Hello, world!

// In file /build.zig
const std = @import("std");
pub fn build(b: *std.Build) void {
const exe = b.addExecutable(.{
.name = "greet",
.root_source_file = b.path("src/main.zig"),
.target = b.standardTargetOptions(.{}),
.optimize = b.standardOptimizeOption(.{}),
});
exe.addIncludePath(b.path("c-src"));
b.installArtifact(exe);
const run_cmd = b.addRunArtifact(exe);
run_cmd.step.dependOn(b.getInstallStep());
const run_step = b.step("run", "Run the app");
run_step.dependOn(&run_cmd.step);
}

// In file /src/main.zig
const c = @cImport({
@cInclude("greeter.c");
});
pub fn main() !void {
c.greet("Alice");
c.greet("Bob");
c.greet(null);
}

$ zig build run
Hello, Alice!
Hello, Bob!
Hello, world!

const MyError = error{
FooReason,
BarReason,
AnotherReason,
};
pub fn myFunction(x: i32) MyError!i32 {
if (x < 43) {
// Simply return the error if you encounter an error condition, instead
// of returning the result.
return MyError.FooReason;
} else if (x > 43) {
return MyError.BarReason;
} else {
return 33;
}
}
// Here the error type is inferred, instead of being explicitly defined.
pub fn myOtherFunction(x: i32) !i32 {
if (x < 99) {
return MyError.AnotherReason;
} else if (x > 99) {
return MyError.BarReason;
} else {
return 22;
}
}

pub fn main() !void {
// This will propagate the error up to the caller of this function.
// In the case of the `main` function, it would end the program.
const my_function_result = try myFunction(43);
std.debug.print("The result of myFunction is: {}\n", .{my_function_result});
// Instead of propagating the error, you can also handle it.
const my_other_function_result = myOtherFunction(43) catch |err| {
switch (err) {
MyError.FooReason => std.debug.print("FooReason\n", .{}),
MyError.BarReason => std.debug.print("BarReason\n", .{}),
MyError.AnotherReason => std.debug.print("AnotherReason\n", .{}),
}
// Return early from the main function.
return;
};
std.debug.print("The result of myOtherFunction is: {}\n", .{my_other_function_result});
}

import { StyleSheet, Text, View } from 'react-native';
const App = () => (
<View style={styles.container}>
<Text style={styles.title}>React Native</Text>
</View>
);
const styles = StyleSheet.create({
container: {
flex: 1,
backgroundColor: '#eaeaea',
},
title: {
marginTop: 16,
backgroundColor: '#61dafb',
color: '#20232a',
fontSize: 30,
fontWeight: 'bold',
},
});

<body>
<p id="counter">0</p>
<button id="trigger">Click me</button>
</body>

document.addEventListener("DOMContentLoaded", function() {
let button = document.getElementById("trigger");
button.addEventListener("click", updateCounter);
});

parseInt("4324"); // returns 4324

function updateCounter() {
// query for the element
let counterElement = document.getElementById("counter");
// get the counter value
let count = parseInt(counterElement.innerText);
// increment the counter value
count++;
// set the counter value
counterElement.innerText = count;
}

$ cd <your-project-name>

$ npm run start

git branch <branch name>

git checkout <branch name>

git checkout -b <branch name>

git checkout <branch name>

git branch -v

git branch -d <branch name>

git branch -m <new branch name>

git checkout main
git merge feature-A

commit 4bdffb07c5abd0d41388c991dc03661be07fe6d0 (HEAD -> main, origin/main, origin/HEAD)
Merge: 269f9f4 11690c2
Author: Jiahao <[email protected]>
Date: Fri May 17 21:46:40 2024 +0800
Merge pull request #7 from francisyzy/main
Update links
commit 11690c25ce187263ff77725647475fee6d5e1faa
Author: francisyzy <[email protected]>
Date: Fri May 17 21:37:24 2024 +0800
Add feedback QR code
commit f5458f36c67c927fcd41e65e29eb5b8a1491d7a1
Author: francisyzy <[email protected]>
Date: Fri May 17 17:14:09 2024 +0800
Update links
commit 269f9f48ddb550da9223d03123279910dd8c5336
Merge: a1fc9bf 01cdd59
Author: Francis Yeo <[email protected]>
Date: Fri May 17 12:15:50 2024 +0800
Merge pull request #3 from woojiahao/main

commit f5458f36c67c927fcd41e65e29eb5b8a1491d7a1
Author: francisyzy <[email protected]>
Date: Fri May 17 17:14:09 2024 +0800
Update links

# .github/workflows/web_ci.yml
name: CI/CD Pipeline
on: [pull_request, workflow_dispatch]
jobs:
  linting:
    runs-on: ubuntu-latest
    steps:
      - name: Fetch repository
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'yarn'
      - name: Install dependencies
        run: |
          yarn
      - name: Lint code
        run: |
          yarn lint
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - name: Fetch repository
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'yarn'
      - name: Install dependencies
        run: |
          yarn
      - name: Run unit tests
        run: |
          NODE_ENV=production yarn test

# .github/workflows/reusable-ci.yml
name: Reusable CI Workflow
on:
  workflow_call:
    inputs:
      workdir:
        description: 'Working directory'
        required: true
        type: string
jobs:
  linting:
    runs-on: ubuntu-latest
    steps:
      - name: Fetch repository
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'yarn'
          cache-dependency-path: "${{ inputs.workdir }}/yarn.lock"
      - name: Install dependencies
        working-directory: ${{ inputs.workdir }}
        run: |
          yarn
      - name: Lint code
        working-directory: ${{ inputs.workdir }}
        run: |
          yarn lint
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - name: Fetch repository
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'yarn'
          cache-dependency-path: "${{ inputs.workdir }}/yarn.lock"
      - name: Install dependencies
        working-directory: ${{ inputs.workdir }}
        run: |
          yarn
      - name: Run unit tests
        working-directory: ${{ inputs.workdir }}
        run: |
          NODE_ENV=production yarn test

# .github/workflows/web_ci.yml
name: CI/CD Pipeline
on: [pull_request, workflow_dispatch]
jobs:
  ci:
    uses: <org name/username>/<repo name>/.github/workflows/reusable-ci.yml@main
    with:
      workdir: web

# .github/workflows/ci.yml
name: CI/CD Pipeline
on: [pull_request, workflow_dispatch]
jobs:
  web_ci:
    uses: <org name/username>/<repo name>/.github/workflows/reusable-ci.yml@main
    with:
      workdir: web
  admin_ci:
    uses: <org name/username>/<repo name>/.github/workflows/reusable-ci.yml@main
    with:
      workdir: admin
  api_ci:
    uses: <org name/username>/<repo name>/.github/workflows/reusable-ci.yml@main
    with:
      workdir: api

const std = @import("std");
fn sum(comptime T: type, values: []const T) T {
var result: T = 0;
for (values) |value| {
result += value;
}
return result;
}
pub fn main() void {
const some_i32s = [_]i32{ 1, 2, 3, 4, 5 };
std.debug.print("sum of i32s: {}\n", .{sum(i32, &some_i32s)});
const some_f32s = [_]f32{ 1.0, 2.0, 3.0, 4.0, 5.0 };
std.debug.print("sum of f32s: {}\n", .{sum(f32, &some_f32s)});
const some_u64s = [_]u64{ 1, 2, 3, 4, 5 };
std.debug.print("sum of u64s: {}\n", .{sum(u64, &some_u64s)});
}

const std = @import("std");
fn sum(comptime T: type, values: []const T) T {
var result: T = 0;
for (values) |value| {
result += value;
}
return result;
}
pub fn main() void {
const some_i32s = [_]i32{ 1, 2, 3, 4, 5 };
std.debug.print("sum of i32s: {}\n", .{comptime sum(i32, &some_i32s)});
const some_f32s = [_]f32{ 1.0, 2.0, 3.0, 4.0, 5.0 };
std.debug.print("sum of f32s: {}\n", .{comptime sum(f32, &some_f32s)});
const some_u64s = [_]u64{ 1, 2, 3, 4, 5 };
std.debug.print("sum of u64s: {}\n", .{comptime sum(u64, &some_u64s)});
}

const std = @import("std");
fn Vector2D(comptime T: type) type {
return struct {
x: T,
y: T,
};
}
pub fn main() void {
const vector_of_i32s = Vector2D(i32){ .x = 1, .y = 2 };
const vector_of_f32s = Vector2D(f32){ .x = 1.0, .y = 2.0 };
std.debug.print("vector_of_i32s: {any}\n", .{vector_of_i32s});
std.debug.print("vector_of_f32s: {any}\n", .{vector_of_f32s});
}

const std = @import("std");
fn Vector(comptime T: type, comptime len: usize) type {
return struct {
values: [len]T,
};
}
const Vector_i32_2D = Vector(i32, 2);
const Vector_f32_4D = Vector(f32, 4);
pub fn main() void {
const vector_of_2_i32s = Vector_i32_2D{ .values = [_]i32{ 1, 2 } };
const vector_of_4_f32s = Vector_f32_4D{ .values = [_]f32{ 1.0, 2.0, 3.0, 4.0 } };
std.debug.print("vector_of_2_i32s: {any}\n", .{vector_of_2_i32s});
std.debug.print("vector_of_4_f32s: {any}\n", .{vector_of_4_f32s});
}

mkdir new-folder/

cd new-folder/

git init

echo 'Hello world' >> hello.txt

git status

On branch main
No commits yet
Untracked files:
(use "git add <file>..." to include in what will be committed)
hello.txt
nothing added to commit but untracked files present (use "git add" to track)

git add hello.txt

On branch main
No commits yet
Changes to be committed:
(use "git rm --cached <file>..." to unstage)
new file: hello.txt

git commit -m "First commit"

const App = () => {
return (
<View>
<Text>This is an app</Text>
</View>
);
}

function App() {
return (
<View>
<Text>This is an app</Text>
</View>
);
}

function App() {
return (
<Button title="Press Me"></Button>
);
}

function App() {
return (
<Button title="Press Me" />
);
}

const MainComponent = () => {
return (
<View>
<View>
<Text>Subcomponent one</Text>
</View>
<View>
<Text>Subcomponent two</Text>
</View>
</View>
);
}

const MainComponent = () => {
return (
<View>
<SubComponentOne />
<SubComponentTwo />
</View>
);
}
const SubComponentOne = () => {
return (
<View>
<Text>Subcomponent one</Text>
</View>
);
}
const SubComponentTwo = () => {
return (
<View>
<Text>Subcomponent two</Text>
</View>
);
}

// index.jsx
import SubComponentOne from "./component-one.jsx"
import SubComponentTwo from "./component-two.jsx"
const MainComponent = () => {
return (
<View>
<SubComponentOne />
<SubComponentTwo />
</View>
);
}
// component-one.jsx
const SubComponentOne = () => {
return (
<View>
<Text>Subcomponent one</Text>
</View>
);
}
export default SubComponentOne;
// component-two.jsx
const SubComponentTwo = () => {
return (
<View>
<Text>Subcomponent two</Text>
</View>
);
}
export default SubComponentTwo;

document.querySelector("button"); // searches for the first <button> tag
document.querySelector(".vintage"); // searches for the first tag with the "vintage" class
document.querySelector("#nexus"); // searches for the first tag with the id "nexus"

crontab -i - Remove your current crontab file, prompting before removal.
crontab -u - Edit another user's crontab file. This option requires system administrator privileges.
Hit R to rename a branch
There are two ways to create a new branch:
Then, you can switch to the branch by using:
Alternatively, you can use the git checkout command for both:
As mentioned earlier, you can switch to a branch via:
To view all branches, you can use the following:
To delete a branch, you add the -d flag:
You may have misspelled the branch name or parts of it. You can rectify it using the -m flag:


JSON is a common format for receiving and transporting data that you will eventually encounter, and it can sometimes be a pain to work with. Luckily, we have a nice program that can help us:
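The helper tool isn't named in this copy of the page, but it is presumably jq, the command-line JSON processor. A quick taste, using a made-up payload shaped loosely like a bus-arrival response:

```shell
# Extract a field from JSON on stdin with jq; -r prints the raw string
# instead of a quoted JSON value.
echo '{"services":[{"no":"95","next_arrival_min":5}]}' | jq -r '.services[0].no'
# → 95
```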
Here's a really nice API:
It gives us the bus arrival timings at public bus stops! Let's say I want to write a script that gives me the time till the next bus, say bus 95:
Cool, but we don't really want to have to find the directory where our script is every time we want to run it! We can put it in one of the special folders that let us run it from anywhere. To find these folders, we can do echo $PATH.
This prints the environment variable PATH, which is what the shell looks at when searching for your normal commands like ls and cd. We can then add our program to one of the PATH locations, most likely something like /usr/local/bin.
Now we can just run the bus command from anywhere like a normal shell command!
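As a sketch of the install step (the script body here is a stand-in, since the real API call isn't shown in this copy):

```shell
# Create a tiny stand-in script named bus, mark it executable, and
# install it into /usr/local/bin, which is usually already on $PATH.
cat > bus <<'EOF'
#!/bin/sh
echo "bus 95 arrives in 5 minutes"
EOF
chmod +x bus
sudo mv bus /usr/local/bin/
bus   # now runs from any directory
```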
Let's do something way more complex! We have an API that gives us the 24-hour weather forecast in Singapore. We want to:
Check this forecast every morning
If it's about to rain, send us an alert. For simplicity, let's send a Telegram message!
Here's the rough script we want:
To get a Telegram bot token, just go to @BotFather, create a new bot, and enter the token you receive into the script
To get your chat id, just go to @getmyid_bot and copy your chat id there
You'll need to start a chat with your bot first: go to your bot and do '/start' before you try running the script
Great! We have a working weather alert! But we don't really want to have to manually run this every morning; that would defeat the purpose of the script! Assuming your laptop runs 24/7, we can make this script run on a schedule using cronjobs!
Cron is a scheduling daemon that executes tasks at specified intervals.
These tasks are called cron jobs and are mostly used to automate system maintenance or administration.
Essentially, we can schedule jobs on our machine to run at a specific time by putting in some magic string + the command we want to run. So what is this magic string?
Without going into too much detail about how cronjobs work, we can grab our magic string from the site below
and pick out the crontab we want, in this case:
So our expression should follow the format:
In our case:
It is important that we use absolute paths here.
And that's it! As long as our laptop or machine is running at 9am in the morning, this script should run automagically!
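Putting it together, a crontab entry for a 9 am daily run might look like the following (the script path is illustrative; substitute the absolute path to your own script, and install the entry with crontab -e):

```
# m h dom mon dow  command
0 9 * * * /home/you/scripts/weather-alert.sh
```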
To put it simply, it is a good way for Github to "authenticate" you. You wouldn't want unauthorized people trying to change your repositories.
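For context, a typical key-generation sketch (the email is just a label for the key, and the output path is the default; ssh-keygen will ask before overwriting an existing key):

```shell
# Generate an ed25519 key pair; -f chooses the output file, -N "" sets an
# empty passphrase for brevity (use a real passphrase in practice).
ssh-keygen -t ed25519 -C "you@example.com" -f ~/.ssh/id_ed25519 -N ""
# The public half is what you paste into GitHub -> Settings -> SSH and GPG keys:
cat ~/.ssh/id_ed25519.pub
```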
The traditional way is to use the commands in Command Glossary to add files to the staging area, then use git commit. Let's try using lazygit to speed up this workflow.
To start, let's first initialize a repository somewhere.
Make a new file, recipe.txt, and modify it a little.
Now, run lazygit
Hit 2 to go to the files submenu.
Hit a to stage all changes (this is the same as git add -A)
Hit spacebar to stage individual files
Hit Enter to enter into a file and use spacebar to select line by line which changes to stage (this is known as interactive staging)
Once you've selected what you want to commit, press c, enter a message, then hit Enter to commit
See: Ignoring Files
By default, Git does not know what files it should be including in a snapshot (and this is a good thing because we don't want Git to just add every file as they may contain sensitive information).
This is where the "three areas" concept comes into play. It is often good to think of your projects with Git as three separate concepts:
Working directory: where your codebase actually resides
Staging area: where changes are gathered before being committed
Repository: where committed snapshots are permanently recorded
How to work with systemd units, etc
Many modern Linux distros go with systemd, which can handle services for you in a convenient manner.
It comes with systemctl, along with a lot of other things (DNS caching resolvers, time sync, a bootloader, etc.), but we'll want to focus on systemctl.
Start/stop/restart services: systemctl start unit.service, etc
Check service status: systemctl status unit.service
journalctl is a utility for querying and displaying logs from journald, systemd's logging service.
To see live logs: journalctl -f
To see the first 20 lines: journalctl | head -n 20
To check a specific service: journalctl -u unit.service
To get all logs from the last boot: journalctl -b
To filter by time (the last 15 minutes, for example): journalctl --since "15 minutes ago"
Aside from reading warnings and making sure our services are running, we often also want to make sure our system as a whole is running as expected. This is where top and htop come into play.
The top command is used to show Linux processes. It provides a dynamic, real-time view of the running system.
Think of it as a super powerful task manager for Linux.
Just type top to start the program.
Pressing q will simply exit the command mode.
Pressing h will show you the help menu.
PID: Shows task’s unique process id.
PR: The process’s priority. The lower the number, the higher the priority.
VIRT: Total virtual memory used by the task.
USER: User name of owner of task.
%CPU: Represents the CPU usage.
TIME+: CPU Time, the same as ‘TIME’, but reflecting more granularity through hundredths of a second.
SHR: Represents the Shared Memory size (kb) used by a task.
NI: Represents a Nice Value of task. A Negative nice value implies higher priority, and positive Nice value means lower priority.
%MEM: Shows the Memory usage of task.
RES: How much physical RAM the process is using, measured in kilobytes.
COMMAND: The name of the command that started the process.
Try running top and see what you can find out about your system!
What are the top 5 processes using the most CPU?
What would I press if I want to kill the processes using the most memory?
I want to see what processes start running when I start my computer. How would I do that?
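If you want to capture this information in a script rather than in the interactive view, top has a batch mode. This assumes the standard procps top found on most Linux distros; flags differ on macOS:

```shell
# -b runs top non-interactively (batch mode); -n 1 prints a single snapshot.
# Piping through head keeps just the summary header and the first few processes.
top -b -n 1 | head -n 12
```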
To get an element's properties or attributes, we can use dot notation:
The above line returns the value of the onclick attribute, which is a function. To make things more interesting, let's go to the html file and add an id attribute:
Reload the page, reopen the console and rerun the above lines to select the button (this time try using the id as a selector instead).
Then run:
This should return "hello", since that is the id we assigned the button.
Given that you have a variable element that has been assigned an element after querying the DOM, you can access any of its attributes using dot notation:
If the element has not been assigned a given attribute it returns an empty string, or if the attribute is invalid it returns undefined
Example: running the following should return "" since the button has no class:
Some attributes have default values, such as the hidden attribute which is false by default:
Just as an element's attributes can be accessed by JS code, they can also be changed by reassigning them values. Let's try this on our button.
Run the following (after querying for the button if needed):
You won't see any visual change on the page (since the class doesn't affect its appearance) but if you navigate to the Browser Inspector and look for the button, you'll see that it has been assigned a class attribute. You can even remove the attribute if you wish:
To visualise the change in a better way, try these steps:
Go to your CSS file and add some rules for a class "random-class" (or whatever you want to call it)
Refresh the page, reopen the console, and query for the button
Assign the button a class attribute as above, with the class being "random-class" (or whatever you called it)
You'll see the CSS being applied to the button
Reset the class attribute back to blank. You'll see that the styling no longer applies.
To hide the button (or any element really) just set its hidden attribute to true:
You'll see the button disappear from the page. To make it reappear, set the hidden attribute back to false:
Any element's (valid) attributes can be assigned or reassigned to any (valid) value by accessing them using dot notation after querying for the element. If an invalid attribute or invalid value is assigned, then it is simply ignored by the page.
If there is no match for the element query, null is returned by the querySelector and getElementById functions. This means that trying to get/reassign any attributes will cause an Uncaught TypeError: x is null.
There are two ways to get the content of an element. One is to use the .innerHTML property:
And the other is to use the .innerText property
The difference between the two is that innerHTML refers to everything inside the element, whereas innerText refers to just the text inside the element. In our example there is no difference, but let's try editing our html file temporarily:
Now we have a <span> element inside the button; a <span> is just an easy way of encapsulating part of a line so specific styling can be added to it.
Now try getting the contents:
You can change the contents as well by reassigning the properties.
Next, we'll look at how you can modify the style of an element using JS.
You can visit the following link: https://github.com/new or simply use the Github UI to create a new repository:
Then, you should see the following page:
Some key details to take note of:
Repository template: creates a project with some basic files already included (we won't use this)
Repository name: any name you want to give your project, you can follow the same name as your local folder (please fill this up)
Description: optional description about your project (optional)
Visibility: public (publicly accessible and viewable) or private (only viewable to yourself and any collaborators) (set it to public for now)
README: file that normally contains basic documentation about the project (do not create one)
.gitignore: tells Git to ignore certain files (such as private files) (do not create one)
LICENSE: specifies an open-source license (if any) (do not create one)
We do not create a README or .gitignore or LICENSE as we have existing files locally and we don't want to cause any problems when trying to move the local repository to Github.
Once done, you should see the following:
Now, let's get your local repository to Github through the remote repository you just created.
The local repository needs to know which remote repository it is going to be connected to. This is where the git remote command comes in handy.
It is used to manage remote repositories for your local repository.
In this instance, we want to add the remote repository you created to the local repository under the name origin (in fact, we can name it anything, origin is just the convention):
Now that you have added the remote repository link to the local repository, you then need to retrieve the main branch from the remote repository (recall that the main branch is the default branch).
This is done using
To upload all local snapshots to the remote repository, you will use git push.
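The remote and push steps can be sketched end-to-end. This is only a self-contained illustration: a local bare repository stands in for Github (with a real repository, the remote URL would be git@github.com:<user>/<repo>.git):

```shell
# Self-contained sketch: a local bare repository stands in for Github
remote_dir="$(mktemp -d)/remote.git"
git init --bare "$remote_dir"

cd "$(mktemp -d)"
git init
echo "Hello World" > hello.txt
git add hello.txt
git -c user.email=demo@example.com -c user.name=Demo commit -m "initial commit"

# Connect the local repository to the "remote" and upload the snapshots
git remote add origin "$remote_dir"
git push -u origin HEAD
```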
Once done, you can navigate back to the Github repository and refresh the page. You will notice the following:
If you open hello.txt, you'll also see that it has Hello World in it.
What you have just accomplished is the "push" workflow where you push an existing local repository to Github. However, there may be times where you want to create a remote repository first and use it locally. This can be done via the "pull" workflow or "clone" workflow.
We will be working off of the same Github repository. Suppose that you want to create a copy of the remote repository on your local machine. You can do so with the following.
Ensure that you navigate out of the current folder:
Then, use the git clone command:
Then, you can navigate into the folder and view hello.txt:
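The clone steps can be sketched self-containedly, again with a local bare repository standing in for the Github URL (git clone git@github.com:<your-username>/<repo-name>.git in the real workflow):

```shell
# Self-contained sketch: clone from a local bare repository
src="$(mktemp -d)/repo.git"
git init --bare "$src"

cd "$(mktemp -d)"
git clone "$src" my-repo   # creates a folder named my-repo
cd my-repo
git remote -v              # origin already points back at the clone source
```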
Regardless of the method you have used, you have now integrated a local repository with a remote one! Let's move on to the final two fundamental concepts: branching and merge conflicts.
The margin: This is the space outside the border of the element separating it from other elements
The padding, border and margin can have individual properties on each side (top, left, bottom and right).
The Browser Inspector allows you to visualise the Box Model of a selected element on the page. An example is below:
We can see the "Google Search" button element selected, and its box model shows its properties. They can be listed as below:
The button's content is ≈94 pixels wide and 34 pixels high
The button has a 1 pixel thick border all around
The button has no padding on its top and bottom but has 16 pixels of padding on its left and right
The button has an 11 pixel margin above and below it, and a 4 pixel margin to its left and right
In CSS, border, margin and padding are all properties that can be assigned to any element and given a valid measurement value. There are additional directional properties for each of these to give specific measurements for a specific side.
Below you can see how each of these are used in CSS.
Property usage: border: type thickness color;
type can take on one of the following values: solid, none, dashed, dotted, double, groove, ridge, inset, outset or hidden
thickness can take on any valid measurement as we saw in the previous section
color can take on any valid color value
To style different sides differently, you can use one of the following properties:
border-top - the top border
border-bottom - the bottom border
border-right - the right border
border-left - you can figure this one out
The type, thickness and color can also be separated out by using different properties:
border-style - the type of border
border-width - thickness of the border
border-color - color of the border
These can again be separated into the directions:
border-top-style, border-bottom-style, border-right-style, border-left-style
border-top-width, border-bottom-width, border-right-width, border-left-width
border-top-color, border-bottom-color, border-right-color, border-left-color
Property usage 1: padding: value;
This assigns the same padding on all 4 sides.
Property usage 2: padding: top right bottom left;
This takes 4 measurements, separated by spaces, and assigns them in clockwise order.
Individual sides can be given different paddings by using padding-top, padding-bottom, padding-right or padding-left.
Property usage 1: margin: value;
This assigns the same margin on all 4 sides.
Property usage 2: margin: top right bottom left;
This takes 4 measurements, separated by spaces, and assigns them in clockwise order.
Individual sides can be given different margins by using margin-top, margin-bottom, margin-right or margin-left.
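Putting the three properties together, a sketch using the measurements from the "Google Search" button example (the button selector and the border color are assumptions):

```css
/* Hypothetical values mirroring the "Google Search" button measurements */
button {
  border: solid 1px black;      /* type thickness color */
  padding: 0px 16px 0px 16px;   /* top right bottom left */
  margin: 11px 4px 11px 4px;    /* top right bottom left */
}
```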
Next, we'll finally begin to add some CSS to our html document using all the knowledge from these past 4 articles.


Memory management in Zig is handled manually, similar to C and C++. By default, variables are allocated on the stack. However, should we need some dynamic memory, we can use allocators to allocate memory on the heap for use.
Allocation is handled by the std.mem.Allocator struct, which defines methods to allocate and free memory based on some underlying allocation strategy. The Zig standard library provides several different allocators with different strategies.
Let's first go through a simple example using the general-purpose allocator.
The general purpose allocator in Zig can be used for most purposes. Here we use the alloc function of the allocator to allocate memory for 16 u8s (16 bytes of memory).
Here's a brief digression to introduce the defer keyword. This keyword can be used to execute an expression at the end of the current scope. If there are multiple defers in the same scope, they will be executed in the reverse order from which they were introduced.
The following statements will produce the following console output.
The example we gave earlier wasn't complete. We didn't actually free the memory we allocated. Should the program have been more long-running (e.g., web server), we would have leaked memory. Luckily, the general-purpose allocator comes with a built-in way to check for leaks, which composes nicely with the defer keyword that we just learnt.
Now, the program should crash indicating where the memory leak took place. To fix this, we can use defer once more to free the memory at the end of the scope.
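Putting the pieces together, a minimal sketch of the pattern just described (general-purpose allocator, defer-based free, and leak checking on deinit); exact allocator APIs may differ slightly between Zig versions:

```zig
const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    // deinit reports a leak if any allocation was never freed
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    // 16 bytes on the heap, freed automatically at the end of the scope
    const bytes = try allocator.alloc(u8, 16);
    defer allocator.free(bytes);
}
```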
This allocator takes in a slice of bytes and performs allocations on it. The example should make this clear.
The arena allocator wraps an existing allocator, using it to perform allocations. However, it doesn't perform any individual frees, instead freeing all the memory it allocated at once upon deinit.
This is the most basic allocator. When you make an allocation, it will ask the OS for an entire page of memory, which makes this extremely space inefficient, and also not performant.
Let's explore memory management further by looking at a common data structure used in Zig programs: the humble ArrayList. This is Zig's implementation of a dynamically-sized array in the standard library.
Notice how we pass allocator into the constructor of the ArrayList. It will store the allocator and use it whenever it needs to allocate memory internally. This is quite different from C, where the allocator is assumed to be a global construct (e.g., malloc and free). In this way, and since std.mem.Allocator represents any allocator, we can separate the concerns of how to allocate memory from how to implement a dynamic list.
However, storing the allocator means that the ArrayList struct takes up more space. There is another version called ArrayListUnmanaged that doesn't require passing the allocator in the constructor. Instead, you pass the allocator each time you need to allocate memory.
And this highlights a common choice in Zig: whether to store the allocator or pass it into each memory-allocating operation. There isn't a correct answer; it really depends on the use-case. Hash maps in Zig also follow a similar principle, with the standard library providing both managed and unmanaged versions.
You might also want to interact with the Github API during your workflows. Some of the common use cases we've noticed include:
Fetching information about the repository/pull request/user
Creating issues
Creating issue/pull request comments
Updating issues/pull requests
Retrieving information about another repository
Automatically running jobs and creating commits
You can use the Github API via Github script, an action that allows you to write Javascript scripts using the Github API — actions/github-script@v7.
The README.md of the action contains a lot of examples of use cases with the Github script. However, we will just cover a very simple script to illustrate a few points:
In the above example, we are using Github script to create a new comment on a newly created issue. We can see that the event that triggers this workflow is the issues event, when one is opened.
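A sketch of such a workflow; the workflow name and comment body are assumptions, while the issues trigger and the github-script API call follow the description above:

```yaml
# Hypothetical workflow: comment on every newly opened issue
name: Issue greeter
on:
  issues:
    types: [opened]

jobs:
  comment:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/github-script@v7
        with:
          script: |
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              body: "Thanks for opening an issue!",
            });
```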
The various API calls are based on the Octokit documentation, where you replace octokit with github!
You might be wondering, "How does the Github script have access to the Github API when some APIs require an API token?"
Amazing question! This is where the GITHUB_TOKEN we talked about earlier comes into play. By default, Github script uses the GITHUB_TOKEN to access these APIs, which means that it is restricted to accessing only the current repository. Therefore, if you wish to access other repositories or data that does not belong to the current repository and requires authentication, you will need to:
Create a Personal Access Token:
Add it to the repository's secrets:
Set the github-token input for the action: see below
Another thing you can use Github script for is combining it with scheduled triggers to automatically perform some actions on the current repository at set intervals. An example of this might be to poll for new information across multiple repositories and update a set of files on the current repository as commits:
This poller runs every day at midnight UTC, fetching all repositories that satisfy some query string and updating the README of the current repository as a commit.
Continuing from the previous section, let's add some more styles to our page.
The buttons look rather plain for now, but that won't be true for long. Let's change some of the properties.
So what's happened here? Let's take a look:
background-color: black; - this just changes the background-color of the button to black
color: white - this changes the color of the text in the button to white
border: solid rgb(160, 78, 146) 3px - this gives the button a solid 3-pixel-thick border that is colored in with a shade of purple.
Let's change the styling of the "useless-button" a little. This button, as you may recall, does nothing (hence its id). Recall also that we gave it the id "useless-button" so we can select the element by its id and assign some styling to it. I'm just going to add 1 change, but feel free to add as many as you'd like.
Here, I have reduced the curvature of the border to 0, i.e. the corners are no longer curved.
Try adding some styling to the form and/or the input field, and maybe to the other elements in other ways.
You may have noticed some sites have buttons or other elements that seem to change their appearance/style when you, for example, hover over them, or click them. This is achieved by pseudoelements and pseudoclasses, which define additional styling rules that are applied when an element is in a particular state or when a specific part of it is targeted.
In general, styling can be added for the pseudoclasses/pseudoelements as follows:
A full list of these pseudoelements/classes can be found online, but for now let's use one pseudoclass in particular, the :hover pseudoclass.
This pseudoclass allows us to define styling that will be applied when the user hovers over an element. Let's apply it to our button:
Nothing much going on here except two things:
font-weight: bold; - this makes the text bold
border-radius: 5px; - this changes the border radius to 5px
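The hover rule described above can be sketched as follows; the button selector is an assumption (you may be targeting a class or id instead):

```css
/* Applied only while the cursor is over the button */
button:hover {
  font-weight: bold;
  border-radius: 5px;
}
```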
Reloading the page shows no obvious changes. But when you hover your cursor over the element, you'll see the changes come into effect:
There are many more possible pseudoelements and pseudoclasses that can help make a complex webpage more interesting and dynamic, but for now let's leave it at this. Feel free to look through the resource above to see a list of pseudoelements; assigning style to them is the same as assigning style to any regular element.
Now that we're done adding some style to the document, this is the end result:
The CSS file should look like this:
Our page now has some structure, thanks to HTML, and a little style thanks to CSS. In the next section, we'll look at adding JavaScript to the page to add some interactivity.
With your new workflow ci.yml, you are able to run unit tests. But another key operation in most CI/CD workflows is linting the project, ensuring that the code follows a certain standard and set of conventions.
Using Github Actions, we want a pull request to fail if the branch contains poorly linted code.
As described in the previous section, we think of answering two key questions when constructing the workflow:
"When will it run?" — established to be during pull_request (and additionally workflow_dispatch for testing)
"What will it do?" — execute the yarn lint script given in the package.json
However, there is an additional question we will want to answer, given that we already have an existing workflow:
"Is this going to be a separate workflow? A separate job in the same workflow? Or just another step in the existing job?"
There is no right or wrong answer for the above. But it is worth considering the following factors:
Is this a part of the CI workflow? — yes, so we might not want to separate it out
Is the task a part of unit testing? — no, so we might want to split it out to avoid cluttering a single job
So in this case, we choose to create a separate job within the same workflow ci.yml. By default, jobs run in parallel, but they can be configured to run sequentially. Running linting and unit tests in parallel saves time (arguably, since we need to reinstall the project dependencies in each job, but as jobs get more complex, running them in parallel allows the simpler ones to complete first) and prevents the result of one job from affecting the other (one can fail while the others pass).
Before we dive into the code required, maybe take some time to think about and attempt to implement the above job! It is not very different from the previous implementation!
You will notice that every step except the last is the same as the unit-tests job. That is because the initial setup of the virtual machine runner does not change! We still need to
Fetch the repository
Setup Node.js
Install project dependencies
And this is all because all jobs run in separate virtual machine runners! So linting does not share these steps with unit-tests.
The only step that differs between linting and unit-tests is the linting step itself, for which we rely on the lint script provided in package.json, which runs eslint . on the project.
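A sketch of the new job as it might appear under jobs: in ci.yml; the action versions and Node version shown are assumptions:

```yaml
jobs:
  linting:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 18
      - name: Install dependencies
        run: yarn install --frozen-lockfile
      - name: Run linting
        run: yarn lint
```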
Now, the single workflow has evolved to include two parallel jobs!
As mentioned at the start of this section, we will be verifying that the linting works by using the workflow_dispatch event. So, once again, push the latest changes to ci.yml to your fork and manually run the workflow:
This time, you will see that there are now two separate jobs running within the same workflow:
Both of them will also complete at around the same time since both linting and unit tests are relatively small at this time:
Try playing around with this new job. Create a branch and purposely commit and PR poorly linted code and see what happens! The linting job should fail while the unit-tests job will continue to work.
Amazing! We have not only set up a CI/CD workflow that runs unit tests, but also linting, and both run in parallel without affecting each other's outcomes!
Let's tackle the final piece of the puzzle: deploying the application to Github Pages!
We've so far seen how to query and update the DOM, but JS allows us to create and delete elements as well.
To create an element, simply use the document.createElement method, passing in the tag name:
The above line creates a new <a> element.
When a new element is made, it is a raw element with no attributes. There are two ways to set attribute values for the element. The first is to assign them via dot notation, the same way as earlier:
The second way is to use the setAttribute method:
Both methods have the same result, but it is convention to use the second method when new attributes are being added to an element.
An element's styling can be changed the same way as before, using the style attribute.
To add some text inside a newly created element, you can use one of two methods. The first involves the innerText property:
The second way is to create a special element, called a Text Node, and add that to the element:
Once the element has been created, it is time to add it to the page so it renders. There are a number of ways to do this, depending on where on the page the element has to go.
If the new element has to be inserted before an existing one, then the insertBefore method is useful:
If the new element has to be inserted before an existing one inside some parent element that is not the body, the following code will work:
The above code will turn this:
Into this:
To insert an element at the end of a parent element, use the appendChild method:
The above code will turn this:
Into this:
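Putting the creation, attribute, text, and insertion steps together, a browser-console sketch (the URL and text are hypothetical):

```javascript
// Browser-console sketch; the URL and text are hypothetical
const link = document.createElement("a");             // raw <a> element
link.setAttribute("href", "https://example.com");     // or: link.href = "..."
link.appendChild(document.createTextNode("Example")); // or: link.innerText = "Example"
document.body.appendChild(link);                      // rendered at the end of <body>
```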
Let's say we select an element that we want to remove from a page:
First, we need its parent element:
Then we can use the removeChild method:
To replace one element with another, use the replaceChild method:
Next, we'll look at fetch requests and how they can retrieve information without having to reload the page. We'll do so with an example of the NUSMods API.
Merge conflicts occur when two (or more) modifications (made by yourself or others) are made to the same line of a file. This causes a state of confusion as Git is unsure which change should be applied. So rather than making a decision for you, it "errors" and lets you decide.
To make this more concrete, let's fabricate a merge conflict.
Firstly, create two new branches from main:
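The branch setup can be sketched as follows; the throwaway repository is included only so the sketch is self-contained (and git init -b assumes Git 2.28 or newer):

```shell
# Throwaway repository so the sketch is self-contained; in the guide you
# would only run the two git branch commands inside your existing repo
cd "$(mktemp -d)"
git init -b main
echo "Hello World" > hello.txt
git add hello.txt
git -c user.email=demo@example.com -c user.name=Demo commit -m "initial commit"

# Create the two branches from main
git branch branch-A
git branch branch-B
git branch   # lists branch-A, branch-B and main
```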
Then, on branch-A, edit the line Hello World by changing it (e.g. Hello). Remember to commit the changes!
Then, on branch-B, perform another edit on the same line but to a different text (e.g. Hello Universe)
Finally, switch back to main and merge both branch-A and branch-B together:
Merging branch-A should not have any issues but merging branch-B will have the following error:
This means that Git is not entirely sure if it should keep the changes from branch-A or the changes from branch-B so it lets you decide.
You can use git status to view the state of the repository now that it has a merge conflict:
Notice that it indicates that hello.txt is unmerged. Only files with merge conflicts are left in this area. Any other files without a merge conflict will be merged as per usual.
To resolve the merge conflict, open up hello.txt in your favorite text editor.
You will notice the following (or something similar depending on your changes):
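With the example edits above (Hello on branch-A and Hello Universe on branch-B), hello.txt will look something like this:

```
<<<<<<< HEAD
Hello
=======
Hello Universe
>>>>>>> branch-B
```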
Git displays both changes (delineated by the =======) and all you need to do is edit the area to remove the <<<<<<< HEAD, =======, and >>>>>>> branch-B markers. You may choose to delete the content in the top half, bottom half, both halves, or keep both.
Notice that the top half is the current content in the branch and the bottom half is the content that is about to be merged.
For the sake of simplicity, let's keep both changes by removing only the marker lines:
Then, save and close the file. Next, add the file to the staging area:
Now, you can perform git status once more to view the status:
The issue has been resolved and so Git no longer has a dedicated section for the unmerged files.
Finally, complete the merge by using git commit. You may notice that the commit message is already provided for you so you can just accept it as it is.
And there you have it, we have successfully caused and resolved a merge conflict! Now, onto the collaborative workflows that Git allows.
Suppose we want to build an app with the following requirements:
A Text component displaying the counter value, with an initial value of 0.
A Button to increment the counter.
Another Button to decrement the counter.
Without knowledge of React state management, our JavaScript instincts would tell us to do something like this:
React Native doesn’t know when to re-render a component if we simply use let counter = 0 and update it. This breaks the rules for props since it involves mutating the value directly.
Instead, React provides the useState hook to manage state changes and trigger re-renders appropriately.
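A sketch of such a counter using the useState hook; the component name and button titles are illustrative:

```javascript
// Sketch assuming React Native's View, Text and Button components
import React, { useState } from "react";
import { View, Text, Button } from "react-native";

export default function Counter() {
  // counter starts at 0; setCounter triggers a re-render on every update
  const [counter, setCounter] = useState(0);

  return (
    <View>
      <Text>{counter}</Text>
      <Button title="Increment" onPress={() => setCounter(counter + 1)} />
      <Button title="Decrement" onPress={() => setCounter(counter - 1)} />
    </View>
  );
}
```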
State is used in various components, such as TextInput, where the text changes every time the user types something. We use useState to track and update the text.
Conditional rendering allows components to render dynamically based on the state. Here’s an example with a modal:
In this example, the toggleModal function toggles the isModalOpen state, which conditionally renders the ModalComponent.
Now that we have understood the basic datatypes and operations of JavaScript, we can start learning about other constructs. This section will cover loops and conditions.
Conditionals are a set of statements that execute different blocks of code based on the evaluation of one or more boolean expressions. In simpler words, if something is true then one thing happens otherwise something else happens.
The syntax for conditional statements in JavaScript is as follows:
Example:
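A minimal sketch of an if / else if / else chain, with hypothetical values:

```javascript
const score = 75; // hypothetical value
let grade;
if (score >= 80) {
  grade = "A";
} else if (score >= 50) {
  grade = "B";
} else {
  grade = "F";
}
console.log(grade); // "B"
```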
The ternary operator operates on 3 operands, allowing the evaluation of a conditional statement in a single line. Like any other operation, ternary conditions can be nested. The syntax is as follows:
Example:
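A sketch with hypothetical values:

```javascript
const age = 20; // hypothetical value
const status = age >= 18 ? "adult" : "minor";
console.log(status); // "adult"
```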
A loop is a block of code that gets executed repeatedly until an end condition is satisfied. There are two types of loops in JavaScript, the for loop and the while loop. The syntax for each is as follows:
An example of a for loop is as follows:
The loop above will print out numbers from 0 to 10 one by one. As you can see, first the counter is declared (let i = 0), then the limit is defined (loop keeps executing as long as i < 11), and then the step is defined (i is incremented by 1 each time; i++). The eventual output will look like this:
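That loop can be sketched as follows, additionally collecting the printed values into an array so the result is easy to inspect:

```javascript
const printed = [];
for (let i = 0; i < 11; i++) {
  console.log(i); // prints 0, 1, 2, ... 10, one per line
  printed.push(i);
}
```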
A while loop that does the same thing as the for loop above would be as below:
Note that here, the counter variable declaration needs to be done outside the loop, and the step is defined inside the loop. This is because the while loop only has a boolean expression in its keyword call. Forgetting to declare the counter variable will lead to a ReferenceError: i is not defined. On the other hand, omitting the step will cause an infinite loop, since the value of i never changes and is always less than 11.
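A sketch of the equivalent while loop:

```javascript
// Counter declared outside the loop, step applied inside
let i = 0;
while (i < 11) {
  console.log(i); // prints 0 through 10, one per line
  i++;            // omitting this step would loop forever
}
```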
In a for loop, the step can be of any size and can involve any valid operation:
The above code prints out numbers backwards from 1024, each time dividing k by 2, as long as k is greater than 1.
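A sketch of such a loop, collecting the values into an array as well for inspection:

```javascript
const values = [];
for (let k = 1024; k > 1; k /= 2) {
  console.log(k); // 1024, 512, 256, ... 2
  values.push(k);
}
```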
The counter variable can also be of any type, and you can declare multiple counter variables:
Here i increases by 1 each time from 0 to 4 (stopping at 5) while j decreases by 3 each time starting from 15 until the loop ends.
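A sketch with two counter variables:

```javascript
const pairs = [];
for (let i = 0, j = 15; i < 5; i++, j -= 3) {
  console.log(i, j); // (0,15), (1,12), (2,9), (3,6), (4,3)
  pairs.push([i, j]);
}
```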
A loop involving strings:
This loop prints every 3rd character of str, starting from the 1st one. The same result can be achieved with a for loop:
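A sketch of both versions, assuming a hypothetical string:

```javascript
// Hypothetical string; both loops take every 3rd character starting from the 1st
const str = "JavaScript";

let chars = "";
let i = 0;
while (i < str.length) {
  chars += str[i];
  i += 3;
}
console.log(chars); // "Jart"

// Equivalent for loop
let chars2 = "";
for (let j = 0; j < str.length; j += 3) {
  chars2 += str[j];
}
console.log(chars2); // "Jart"
```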
Similar results can be achieved with an array.
In the next section, we'll look at functions, their types, and how to declare and use them.
Let's get started with some styling. But what styles do we add?
Why not start with the background color of the page?
You'll notice how the white colored page doesn't match up with the grey background of the html-logo picture. There are two ways to fix this: either edit the image so it has a transparent background, or change the background color of the document to match the image. The second one is easier, so let's do that for now.
First, use the Eyedropper to get the hex code of the color of the image by clicking anywhere on the background of the image with the eyedropper active. You'll find that it is #25272a.
Now, go to the styles.css file and add this snippet:
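The snippet, using the color we picked with the eyedropper:

```css
body {
  background-color: #25272a; /* matches the image's grey background */
}
```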
Go back and refresh the page. You should see the page background color change to match that of the image.
We have a new problem now. You can't see the text! So let's change the text color. This is done by assigning a value to the color property.
honeydew is one of the valid colors recognized by browsers, and is a very light shade of creamy-white-yellow. Feel free to choose a different text color though, as long as it makes the text visible.
Now I want to change the font. There's nothing wrong with it, I just want a different one. I like Trebuchet MS, but you can pick any font you want. To assign a font, the font-family property has to be used:
This changes the font to Trebuchet sans-serif. Reload the page to see the changes.
You'll notice that when a property is added to the <body> tag, it changes the values for all tags inside it (i.e. I did not have to add styling specifically for the <p> tags and the divs and headings etc.). This is because, when styling is applied to an element, the same properties are inherited by all elements nested within it. This is why, if I want to change a property for every element on the page, I just need to assign it to the <body> tag.
I can later overrule a property by giving a more specific selector (i.e. I can add a new color: red to the <p> tag, which makes it so that the <p> tags have red text but the rest of the page remains as-is).
You'll notice that the hyperlinks have not changed color, even though we specified a different color for the text. This is because the <a> tag has a different default color that overrules any color specified earlier. We need to define a new rule to change the color specifically for the <a> tag
Note that hyperlinks are often colored differently from the rest of the text to highlight them.
In the CSS file, add this new rule below the body tag rule:
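A sketch of the rule; yellowgreen is an assumption based on the described yellowish-green color:

```css
/* Overrules the default link color */
a {
  color: yellowgreen;
}
```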
Now reload the page, and you'll see the links are colored yellowish-green.
Let's go to the html-logo image. You'll see that it is quite large, and we want to resize it to make it smaller. One way is to give it a static/absolute height and width:
You'll see the image get smaller, but it also means you need to "hardcode" the values. You could use relative measurements so that the image gets bigger/smaller according to the screen size, but that is left as an exercise.
Remember our "long-div" and "short-div" classes? Let's use those to assign some properties.
For the "long-div", I might want the text to be italicised, like this. I can use the font-style property for this:
As for the "short-div", I want something called "small-caps", which is when the font appears in all capital letters but sized according to the smaller letters. See for yourself with the font-variant property:
So far, these are the styles we have added:
The page should look like this:
Next, we'll continue adding some style to the buttons, and introduce pseudoelements and pseudoclasses.
It is very important that these steps are done correctly to ensure that you can follow along.
To follow along this guide, you need to install the following:
To ensure that everything is working, run the following commands:
You should see text like this:
Once Git is setup on your local machine, do some initial configuration:
Then, setup Github as such:
Create a Github account if you don't have one
Connect to Github with SSH:
Generate a new SSH key
Add the SSH key to Github
To verify that Github is working, run the given command (ssh -T git@github.com) to ensure that your SSH connection is working correctly.
Git relies on the core concept of a repository, which is essentially a parent folder that Git monitors for changes to its contents (including sub-folders).
These repositories can exist on both your local machines or remotely on an external server (or self-hosted). This guide will look at both instances.
Github is an example of a hosted remote Git server where you can create remote repositories and work on them locally (while pushing changes remotely, hence the "decentralized" nature of Git).
Think of it like having two versions of a Google Doc. When you are editing your document on a train, for example, you might lose connectivity, leaving you with an offline copy which is different from the online copy (the source of truth).
By default, Git does not know what files it should be including in a snapshot (and this is a good thing because we don't want Git to just add every file as they may contain sensitive information).
This is where the "three areas" concept comes into play. It is often good to think of your projects with Git as three separate concepts:
Working directory: where your codebase actually resides
Operators in JavaScript are symbols that can perform operations on certain values and data types. The most basic examples are the arithmetic operators: +, -, * and /. Operators each have a corresponding operation, can be binary or unary, and can accept a fixed set of values.
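A minimal sketch of binary and unary operators, with hypothetical values:

```javascript
const sum = 2 + 3;   // binary +: takes two operands
const neg = -sum;    // unary -: takes a single operand
console.log(sum, neg); // 5 -5
```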
Arrays, as mentioned, are an ordered set of values separated by commas and enclosed in square brackets ([]).
There are two ways to add JS code into HTML pages. The first way is to make use of the <script> tag:
The second way is to write the code in a separate file, save it with the .js file extension, and import it using the <script> tag:
This is the recommended method which we will stick to for this guide, but note that for shorter programs (3-4 lines long) it is sometimes easier to use the first method.
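Both approaches can be sketched as follows; script.js is a hypothetical file name:

```html
<!-- Method 1: inline code inside the <script> tag -->
<script>
  console.log("Hello from inline JS");
</script>

<!-- Method 2 (recommended): import an external file -->
<script src="script.js"></script>
```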
Pull requests are the cornerstone of collaborative workflows.
Pull requests are proposals for a set of changes made on a separate branch (on the same or a different remote repository). They allow other contributors/developers to share their comments about the changes made and allow the creator of the pull request to improve their changes before they are merged into the main branch (or any branch for that matter).
While you may be able to push directly to a remote repository (provided that you have been added as a collaborator), it opens up the possibility of conflicting or overriding changes.
To create a pull request, you first have to push a local branch to a remote repository. For now, we will work with the same repository you had created earlier.
In CSS, there are many ways to define colors. We'll look at 4 popular ways to do so in this section.
There is a long list of colors that are known to all browsers by name, and these colors can be directly assigned as color values for properties like color (color of the text) or background-color. Of course there are the more "common" colors like red, orange and black.
JavaScript is a dynamically-typed and weakly-typed language, meaning you do not need to declare a variable's type in advance, you can reassign a variable to any type, and implicit type conversion occurs automatically when possible.
The following page has some of the datatypes in JavaScript.
Syntax:
* * * * * command(s)
- - - - -
| | | | |
| | | | ----- Day of week (0 - 7) (Sun=0 or 7)
| | | ------- Month (1 - 12)
| | --------- Day of month (1 - 31)
| ----------- Hour (0 - 23)
------------- Minute (0 - 59)
Example:
*/5 * * * * /path/to/script.sh # Run every 5 minutes
#!/bin/bash
# Replace these with your actual Telegram bot token and chat ID
TELEGRAM_BOT_TOKEN="YOUR_BOT_TOKEN"
CHAT_ID="YOUR_CHAT_ID"
TELEGRAM_API_URL="https://api.telegram.org/bot$TELEGRAM_BOT_TOKEN/sendMessage"
# Replace this with your actual curl command that fetches the JSON weather data
response=$(curl -s "https://api.data.gov.sg/v1/environment/24-hour-weather-forecast")
# Define an array of rain-related keywords
rain_keywords=("Rain" "Showers" "Thundery Showers" "Heavy Thundery Showers")
# Extract the general weather forecast without jq
general_forecast=$(echo "$response" | grep -o '"forecast":"[^"]*' | sed 's/"forecast":"//')
# Initialize a flag to check if rain is found
rain_found=0
# Check for rain-related keywords in the general forecast
for keyword in "${rain_keywords[@]}"; do
if [[ "$general_forecast" == *"$keyword"* ]]; then
rain_found=1
break
fi
done
# If rain is detected, send an alert
if [ "$rain_found" -eq 1 ]; then
message="Weather Alert: Rain expected! General forecast is '$general_forecast'. Stay prepared!"
# Send the message to the Telegram bot
curl -s -X POST $TELEGRAM_API_URL \
-d chat_id=$CHAT_ID \
-d text="$message" > /dev/null
echo "Alert sent: $message"
else
echo "No rain expected. General forecast is '$general_forecast'."
fi
git branch <branch name>
git checkout <branch name>
git checkout -b <branch name>
git checkout <branch name>
git checkout <commit-hash/branch name>
git checkout main
git merge feature-A
sudo apt-get install jq
brew install jq
#!/bin/bash
# Replace this with your actual curl command that fetches the JSON response
response=$(curl -s "https://arrivelah2.busrouter.sg/?id=18331")
# Extract the 'time' field of the next bus using jq
next_bus_time=$(echo "$response" | jq -r '.services[0].next.time')
# Convert current time and the bus time to epoch for comparison
current_time=$(date +%s)
bus_arrival_time=$(date -d "$next_bus_time" +%s)
# Calculate time difference in seconds
time_diff=$((bus_arrival_time - current_time))
# Convert time difference to minutes and seconds
minutes=$((time_diff / 60))
seconds=$((time_diff % 60))
# Display the result
if [ "$time_diff" -gt 0 ]; then
echo "Bus number 95 will arrive in $minutes minutes and $seconds seconds."
else
echo "Bus number 95 has already arrived or will arrive shortly."
fi
#!/bin/bash
# Replace these with your actual Telegram bot token and chat ID
TELEGRAM_BOT_TOKEN="YOUR_BOT_TOKEN"
CHAT_ID="YOUR_CHAT_ID"
TELEGRAM_API_URL="https://api.telegram.org/bot$TELEGRAM_BOT_TOKEN/sendMessage"
# Replace this with your actual curl command that fetches the JSON weather data
response=$(curl -s "https://api.data.gov.sg/v1/environment/24-hour-weather-forecast")
# Define an array of rain-related keywords
rain_keywords=("Rain" "Showers" "Thundery Showers" "Heavy Thundery Showers")
# Extract the general weather forecast using jq
general_forecast=$(echo "$response" | jq -r '.items[0].general.forecast')
# Initialize a flag to check if rain is found
rain_found=0
# Check for rain-related keywords in the general forecast
for keyword in "${rain_keywords[@]}"; do
if [[ "$general_forecast" == *"$keyword"* ]]; then
rain_found=1
break
fi
done
# If rain is detected, send an alert
if [ "$rain_found" -eq 1 ]; then
message="Weather Alert: Rain expected! General forecast is '$general_forecast'. Stay prepared!"
# Send the message to the Telegram bot
curl -s -X POST $TELEGRAM_API_URL \
-d chat_id=$CHAT_ID \
-d text="$message" > /dev/null
echo "Alert sent: $message"
else
echo "No rain expected. General forecast is '$general_forecast'."
fi
curl https://arrivelah2.busrouter.sg/?id=18331
#!/bin/bash
# Replace this with your actual curl command that fetches the JSON response
response=$(curl -s "https://arrivelah2.busrouter.sg/?id=18331")
# Extract the 'time' field of the next bus using jq
next_bus_time=$(echo "$response" | jq -r '.services[0].next.time')
# Convert current time and the bus time to epoch for comparison
# TODO: Implement this (hint: figure out how to use date +%s)
current_time=
bus_arrival_time=
# Calculate time difference in seconds
time_diff=
# Convert time difference to minutes and seconds
minutes=$((time_diff / 60))
seconds=$((time_diff % 60))
# Display the result
if [ "$time_diff" -gt 0 ]; then
echo "Bus number 95 will arrive in $minutes minutes and $seconds seconds."
else
echo "Bus number 95 has already arrived or will arrive shortly."
fi
echo $PATH
cp bus.sh /usr/local/bin/bus
curl https://api.data.gov.sg/v1/environment/24-hour-weather-forecast | jq
#!/bin/bash
# Replace these with your actual Telegram bot token and chat ID
TELEGRAM_BOT_TOKEN="YOUR_BOT_TOKEN"
CHAT_ID="YOUR_CHAT_ID"
TELEGRAM_API_URL="https://api.telegram.org/bot$TELEGRAM_BOT_TOKEN/sendMessage"
# Replace this with your actual curl command that fetches the JSON weather data
response=$(curl -s "https://api.data.gov.sg/v1/environment/24-hour-weather-forecast")
# Define an array of rain-related keywords
rain_keywords=("Rain" "Showers" "Thundery Showers" "Heavy Thundery Showers")
# Extract the general weather forecast using jq
general_forecast=$(echo "$response" | jq -r '.items[0].general.forecast')
# Initialize a flag to check if rain is found
rain_found=0
# Check for rain-related keywords in the general forecast
# TODO: Try implementing this! Your code should follow this general idea:
# for each keyword in rain_keywords check if the general forecast matches it. If
# so, set rain_found to 1
# If rain is detected, send an alert
if [ "$rain_found" -eq 1 ]; then
message="Weather Alert: Rain expected! General forecast is '$general_forecast'. Stay prepared!"
# Send the message to the Telegram bot
curl -s -X POST $TELEGRAM_API_URL \
-d chat_id=$CHAT_ID \
-d text="$message" > /dev/null
echo "Alert sent: $message"
else
echo "No rain expected. General forecast is '$general_forecast'."
fi
<magic string> <command>
0 9 * * * /path/to/script/weather.sh
lazygit
mkdir recipe_repo
cd recipe_repo
git init
journalctl -f
journalctl -n 20
journalctl -u sshd
journalctl -b
journalctl --since "15 minutes ago"
let button = document.querySelector("button");
button.onclick; // function onclick(event)
<button onclick="alert('Hello!')" id="hello">Say hello</button>
button.id; // "hello"
element.attributeName; // returns attribute value
// the class attribute uses .className, not .class
button.className; // ""
button.class; // undefined
button.hidden; // false
button.className = "random-class"; // feel free to plug in your class name
button.className = "";
button.hidden = true;
button.hidden = false;
button.innerHTML; // "Say hello"
button.innerText; // "Say hello"
<button onclick="alert('Hello!')" id="hello">
<span>Say hello</span>
</button>
let button = document.getElementById("hello");
button.innerHTML; // "<span>Say hello</span>" with some whitespace around it
button.innerText; // "Say hello"
git remote add origin [email protected]:<github username>/<repository name>.git
git branch -M main
git push -u origin main
cd ../
git clone [email protected]:<github username>/<repository name>.git another-folder/
cd another-folder/
cat hello.txt
let newElement = document.createElement("a");
git checkout -b branch-A
git checkout main
git checkout -b branch-B
git add <file/directory>
Stage changes (use . to add everything)
git commit -m "<message>"
Commit staged changes
git reset <file>
Unstage changes
git restore <file>
Discard changes in the working directory
git revert <commit>
Create a new commit that undoes the changes made in <commit>
git status
View tracked/untracked files
git reflog
List updates to branch tips and HEAD (useful for recovering "lost" commits)
git log
List repository history
git show
Show current commit
git diff <A> <B>
Show differences between A and B (A and B can either be commits or branches)
git diff --staged
Show staged changes (differences between the staging area and the last commit)
git branch
List branches
git checkout <branch>
Switch into a different branch
git checkout -b <branch>
Create and switch into a new branch
git branch -d <branch>
Delete branch
git merge <branch>
Applies the changes from <branch> into the current branch
git clone <url>
Copy a remote repository
git fetch
Fetch changes from the remote
git pull
Pull updates from remote to local (default is via merge, use --rebase to rebase instead)
git push
Push changes to remote
git <command> -h
Show command usage
git <command> --help
Show detailed help page
git blame <file>
Show who last modified each line in a file
git init
Initialise repository
git config <option> <value>
e.g. git config --global user.email
git version
git version 2.42.0
git config --global user.email "<your email>"
git config --global user.name "<your name>"
Staging area: set of files that you want to include in a snapshot
Repository: local/remote repository storing metadata about the project and Git
By default, all of your files reside in the working directory and are not yet added to the staging area. If you want a file included in the staging area, then you must first add it to the staging area (we will cover how this happens later on).











border-radius: 15px - this allows me to curve the corners of the border of an element. In this case, it has a radius of curvature of 15 pixels.
margin-left: 5px - this adds 5 pixels worth of space on the left of the element (remember what a margin is?)
padding: 5px 8px 5px 8px; - this adds padding on the inside of the button. Going clockwise, it gives 5px of padding at the top, 8px on the right, 5px at the bottom and 8px on the left.










The console shows ReferenceError: i is not defined in grey text because the program has not yet run, but the console already knows what the result is going to be. This tells me, as the programmer, to look for the problem and fix it before running the code.


Now, run the following command to view the status of your repository:
You should see the following:
Recall from Fundamental Concepts that Git does not automatically add files to a snapshot, as it does not know exactly what you want. So we need to tell Git that we want hello.txt in the snapshot.
You may notice that the git status message states that hello.txt is untracked. Untracked files are those that have never been registered with Git before. They are often new files that have been added to the repository and have not existed in any snapshots.
Files that have been added to a snapshot before are considered "tracked" and Git knows to look out for changes between snapshots.
As discussed in Introducing the commit, a file from the working directory needs to be explicitly added to the staging area for a snapshot to include it. By default, an untracked file that is added to a snapshot becomes tracked for future snapshots.
To add hello.txt to the staging area, use the following command:
Then, use git status to view the status of your repository again:
Notice that now, instead of stating that your file is untracked, Git indicates that the changes have not been committed. This is a sign that the file(s) are tracked and staged for the snapshot.
Now, to take the snapshot (make the commit), you can use the following:
The -m flag is used to specify the commit message. Every commit has an accompanying message that you can use to indicate what the commit contains/entails.
There you have it! You have made a local repository and created a snapshot! We will now look at how we can integrate Github with your local repository!


A value that an operator performs an operation on is called an operand
The following section has a few tables of operators, their corresponding operations and their accepted operand(s).
These are the classic addition, subtraction, multiplication and division operators that are nearly standard across languages:
+ for addition
- for subtraction
* for multiplication
and / for division
These are all binary operators that take two numbers in as operands.
As an aside, the + operator also allows for string concatenation (note that it does not concatenate arrays; use the concat method for that):
The ** operator allows for exponents:
The % operator gets the remainder after division.
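Putting the arithmetic operators above together in one sketch (the values are illustrative):

```javascript
let sum = 7 + 3;        // 10
let difference = 7 - 3; // 4
let product = 7 * 3;    // 21
let quotient = 7 / 2;   // 3.5 (JavaScript has no separate integer division)
let power = 2 ** 3;     // 8
let remainder = 7 % 3;  // 1
// + also works as string concatenation
let greeting = "Hello, " + "world"; // "Hello, world"
```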
These operators allow for, as you can guess, comparing two values. They are binary operators that return a boolean value. The symbols are as follows, with their usage with numbers being as expected:
a > b returns true if a is greater than b
a >= b returns true if a is greater than or equal to b
a < b returns true if a is lesser than b
a <= b returns true if a is lesser than or equal to b
a !== b returns true if a is not equal to b
The comparison operators can also be used to compare two strings; they return true or false based on a character-by-character comparison of the two strings. An example is below:
The way the above works is that the first characters of each string are compared. If they are equal, the next characters are compared, and so on. When two characters differ, they are compared according to the operator (after internally converting the characters to their numeric character codes) and a boolean value is returned accordingly.
If one string ends before the other and all characters up till that point are the same, the longer string is deemed the "greater" value:
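A few illustrative comparisons (the example strings are mine):

```javascript
let byFirstChar = "apple" < "banana";  // true: 'a' comes before 'b'
let byPrefix = "apple" < "applesauce"; // true: "apple" ends first, so it is "lesser"
let caseMatters = "Zoo" < "apple";     // true: uppercase letters have smaller character codes
```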
The equality operation has two possible operators, each of which function slightly differently.
The first is the "triple-equals" or the "type-strict equals" operator: ===. This operator works as you would expect:
The second is the "double-equals" or the "type-lax equals" operator: ==. This operator works similarly to the type-strict equals operator, except that it implicitly converts the two operands to the same datatype:
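A minimal sketch of the difference:

```javascript
let strict = 5 === "5";    // false: a Number is never strictly equal to a String
let lax = 5 == "5";        // true: "5" is implicitly converted to the number 5
let strictNot = 5 !== "5"; // true: !== is the strict counterpart of !=
```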
These operators can take one or two boolean expressions and returns a boolean value. They are often combined with comparison operators.
The AND operator, && (double ampersand symbol), is a binary operator that compares two boolean expressions and returns true only if both expressions evaluate to true. If either one of the expressions evaluates to false, then && returns false.
The OR operator, || (double bars), is a binary operator that compares two boolean expressions and returns true if either one of them evaluates to true. If both the expressions evaluate to false, then the operator returns false.
The NOT operator, !, is a unary operator that reverses a boolean expression's value. If the expression evaluates to true, it returns false; if the expression evaluates to false, then it returns true. It does not change the original expression's value.
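Combining the three logical operators with comparisons (illustrative values):

```javascript
let x = 7;
let inRange = x > 0 && x < 10; // true: both comparisons hold
let either = x < 0 || x === 7; // true: the second expression holds
let negated = !(x > 0);        // false: ! flips true to false
```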
As seen before, this operator has the symbol = and allows you to assign a value to a variable or constant. Example:
This class of operators are formed by combining a binary logical or mathematical operator with the value assignment operator: op=. They can then be used as an assignment operator, assigning a value to a variable while performing the binary operation on both. Essentially: a op= b is the same as a = a op b. Some examples are below:
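For instance, using the arithmetic operators from earlier (values are mine):

```javascript
let n = 10;
n += 5; // same as n = n + 5, so n is now 15
n -= 3; // 12
n *= 2; // 24
n /= 4; // 6
n %= 4; // 2
```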
These two unary operators allow to increment and decrement numeric variable values by 1. The increment operator is a double plus (++) and the decrement operator is a double minus (--). They can be placed either behind or in front of a variable name, and are accordingly called pre- or post-increment or decrement.
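The pre/post distinction matters when the expression's value is used, as this sketch shows:

```javascript
let i = 5;
let post = i++; // post-increment: post gets the old value 5, then i becomes 6
let pre = ++i;  // pre-increment: i becomes 7 first, then pre gets 7
```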
These operators allow you to perform bitwise operations like bitwise AND, OR and NOT on values. To see how they work in detail, visit this site.
Recall that JavaScript is a weakly-typed language. This means that you can end up with situations like this:
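For example, mixing strings and numbers triggers implicit conversions that depend on the operator:

```javascript
let a = "1" + 2;     // "12": the number is converted to a string and concatenated
let b = "5" - 1;     // 4: the string is converted to a number for subtraction
let c = "5" + 1 - 1; // 50: "5" + 1 gives "51", then "51" - 1 gives 50
```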
This is one of the reasons it is important to maintain type consistency and stronger typing in your code. Wacky issues like this also contributed to the popularity of languages like TypeScript, a statically-typed superset of JavaScript.
Now that we have covered data types and operations concerning these datatypes, the next section will cover a few more coding constructs of JavaScript.
As with strings, you can access a particular element in an array using square bracket indexes:
These indexes can be chained to access particular elements in nested arrays:
You can also reassign values in an array in this way:
The length of an array is accessible through its length property:
To add values to the end of an array, there are 2 ways. The first way is to assign values using indexing.
In general, arr[arr.length] = x will add x to the end of the array.
It is possible to use the wrong index and accidentally reassign a value. It is also possible to pass in an index beyond the length of an array. In this case, JavaScript will create "gaps" in the array that read as the value undefined:
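A sketch of both appending by index and accidentally creating gaps:

```javascript
let arr = [1, 2, 3];
arr[arr.length] = 4; // index 3 is one past the end, so 4 is appended
arr[6] = 7;          // indexes 4 and 5 are skipped, leaving gaps
console.log(arr.length); // 7
console.log(arr[4]);     // undefined (a gap)
```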
The second way to add elements prevents any chance of using the wrong index, and involves the push method.
The method takes in one argument and pushes it to the back of the array. It also returns the new length of the array after insertion. push is a variadic function, so you can push multiple values at once:
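A minimal sketch of push and its return value:

```javascript
let arr = [1, 2];
let newLength = arr.push(3); // appends 3 and returns the new length, 3
arr.push(4, 5);              // variadic: several values at once
// arr is now [1, 2, 3, 4, 5]
```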
There is no easy way to insert elements into a specific position in the array. One way is to copy over the elements into a new array, making sure to insert the new value you want to at the right index, and then use the new array. Another way is to use the splice method, as detailed in this StackOverflow post.
The best way to remove elements from an array is to use the pop method. It is a nullary function that removes the last element in the array and returns the element it removed.
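For instance:

```javascript
let arr = ["a", "b", "c"];
let removed = arr.pop(); // removes and returns "c"
// arr is now ["a", "b"]
```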
Again, it is not easy to remove a specific element; the splice and indexOf methods will be needed.
To check if an element exists in an array, the indexOf method is useful. It returns -1 if the element is not present, otherwise it returns the index of the element.
indexOf will not work when passing in another array as an argument:
This is because indexOf uses the === operator, which returns false for two arrays even if they have the same elements unless they refer to the same object:
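Both behaviours in one sketch (the arrays here are illustrative):

```javascript
let arr = [10, 20, 30];
let at = arr.indexOf(20);   // 1
let gone = arr.indexOf(99); // -1: not present
let nested = [[1, 2], [3, 4]];
let missing = nested.indexOf([1, 2]);  // -1: a fresh [1, 2] is a different object
let found = nested.indexOf(nested[0]); // 0: same object reference
```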
To get a particular portion of an array, you can use the slice method. It takes in a start and end and returns all the elements from the start index to just before the end index:
Omitting the end value will slice till the end of the array:
If the end value is less than or equal to the start value, slice returns an empty array:
If only one argument k is provided and the argument is negative, then the last |k| elements are returned:
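The slice behaviours described above, in one sketch:

```javascript
let arr = [0, 1, 2, 3, 4];
let middle = arr.slice(1, 3); // [1, 2]: from index 1 up to (not including) 3
let tail = arr.slice(2);      // [2, 3, 4]: no end, so slice to the end
let empty = arr.slice(3, 2);  // []: end is not after start
let lastTwo = arr.slice(-2);  // [3, 4]: a negative argument takes from the end
```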
There are two ways to sort an array. The first is the sort method, which sorts the array in place (it also returns a reference to the same array, and can optionally take a comparison function):
The second way is to use the toSorted method. This does not change the original array and instead returns a new sorted array:
Both these methods sort the elements of the array in ascending order as defined by JavaScript; note that by default the elements are compared as strings, so arrays of numbers may need a comparison function like (a, b) => a - b. To sort an array in descending order, you can call one of the sort or toSorted methods and then reverse the array.
Like with sorting, there are two ways to reverse the ordering of elements in an array. The first is the reverse method, which works like the sort method (i.e. it changes the original array):
The second way is to use the toReversed method which is similar to the toSorted method (i.e. it does not modify the original array and returns a new one instead):
To sort an array in descending order, combine the sorting and reversing methods:
A for or while loop can be used to iterate through the elements of an array and perform an action. For this next bit let's try to go through the elements of an array nums and add the squares of each element to a new array. Below is the code for this:
But there is a better way of doing this. Arrays have a forEach method that takes in a unary function and applies it to every element in the array. It is common to use lambda expressions and anonymous functions for this. The above code can be shortened to:
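The squares example with forEach looks like this:

```javascript
let nums = [1, 2, 3, 4];
let squares = [];
nums.forEach((n) => squares.push(n * n));
// squares is now [1, 4, 9, 16]
```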
If the unary function passed in to forEach returns any value, the value is ignored and "lost" i.e. cannot be used by any code you write
There is a full list of array methods here; this section would be far too long if I tried covering them here.
Now we're done covering the basics of JavaScript. The next section will move on to HTML, followed by CSS, and then back to JavaScript where we'll combine the 3 to make a functional frontend.
The Number datatype in JavaScript is a primitive datatype that represents, surprise surprise, a number. Unlike many other languages, JavaScript does not explicitly differentiate between integers and floats and any such value just has the type Number.
Some examples of Numbers are 0, 1, -1.32, and Infinity. Infinity is a special value used by JavaScript to indicate an infinite value, such as the result of dividing a positive number by 0.
JavaScript numbers can be formatted in scientific notation as follows:
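For example:

```javascript
let big = 1.5e3;  // 1.5 times 10 to the power 3, i.e. 1500
let small = 2e-3; // 0.002
```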
The boolean datatype in JavaScript is another primitive datatype that can take on one of two values: true or false. These values are useful in any code involving predicates.
The String datatype in JavaScript is a compound datatype that is essentially any text enclosed in quotation marks (double or single are both fine, as long as the start and end quotation marks match).
Strings are immutable, which means that any function that takes in a string does not change its value; it instead returns a new string.
A particular character in a string can be accessed using square brackets ([]) wrapped around a number representing the position of the character, called the index. The first character is said to be at index 0. There are other ways to access a particular character in a string which will be discussed later.
The length of a string can be found using the .length property:
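A sketch of indexing, length, and immutability (the string is illustrative):

```javascript
let s = "hello";
let first = s[0];            // "h": index 0 is the first character
let len = s.length;          // 5
let upper = s.toUpperCase(); // "HELLO": a new string is returned
// s is still "hello" because strings are immutable
```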
The Array datatype in JavaScript is a compound mutable datatype that represents an ordered collection of values. The values can be of any type, and values of different types can be assigned to the same array without an issue. An array is declared by separating the values with commas (,) and enclosing the list with square brackets []. As with strings, you can get the length of the array with the .length property, and access a particular value with square bracket indices. Arrays can also be nested if need be.
As you saw above, indexes can be chained in the case of nested arrays.
An Object in JavaScript refers to a container variable that contains many values. By this definition, an array is an object. In JavaScript, objects are declared and assigned using object literal notation, which forms the basis of JavaScript Object Notation, or JSON. You can imagine an object as a set of key-value pairs.
As you can see, the key-value pairs are separated by commas (,), enclosed by curly braces ({}) and the pairs are formatted as key: value. Note that the key does not have to be enclosed in quotation marks as long as it has no spaces, operators(+, = etc.) and other reserved characters or keywords.
To access a value assigned to a key (formally called a property in JS) you can either use square brackets ([]) or dot notation:
Note that with dot notation, you do not need quotation marks whereas with square bracket notation you do need quotation marks.
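Both access styles in one sketch (the object is illustrative):

```javascript
let person = { name: "Ada", age: 36 };
let byDot = person.name;       // "Ada": dot notation, no quotation marks
let byBracket = person["age"]; // 36: square bracket notation, key is quoted
let missing = person.email;    // undefined: the property does not exist
```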
In JavaScript, undefined is a primitive value that is returned by functions or expressions that have no return value as mentioned in the previous section. It is also a default value that is assigned by JavaScript to a variable that has been declared without assignment. For instance,
will assign undefined to x until a different value is assigned later on.
undefined can also be assigned to a variable, strangely enough.
This is another primitive value that can be assigned to variables or returned by functions or expressions. It is used to represent an empty, non-existent value rather than as a flag to indicate no return value.
This primitive value stands for "Not a Number" and is used by JavaScript to indicate that the result of an expression or function is not a number. One way to get this is to divide a number by a string that does not have numeric digits. While most languages would throw an error or exception of some kind in such situations, JavaScript instead handles it by returning NaN to indicate that the expression was erroneous.
There are two ways to determine the type of a variable in JavaScript. The first is more common, and involves using the typeof operator, which takes in any value and returns a string with the data type.
As you'll notice, typeof returns "object" when you pass in an array, even though they are essentially different. So the second way is to use the instanceof keyword. Here's how it works:
Now we can differentiate between objects and arrays. But the instanceof keyword has its own issues:
Whelp, 10 isn't a number apparently. Thanks instanceof.
Moral of the story: Based on what you are trying to achieve, make sure you use the right technique to determine a variable's type. For instance, if I had to validate some input to make sure it is a number, I would use the typeof operator and check if it outputs "number". But if I had to instead check if my input is an array specifically, I would need to use the instanceof keyword as above.
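A side-by-side sketch of the two techniques:

```javascript
let t1 = typeof 10;                  // "number"
let t2 = typeof [1, 2];              // "object": arrays are objects to typeof
let isArr = [1, 2] instanceof Array; // true
let isNum = 10 instanceof Number;    // false: 10 is a primitive, not a Number object
```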
typeof(null) returns "object"
typeof(NaN) returns "number"
Next, we'll take a look at operations and some operators in JavaScript, along with what datatypes can be used with them.
git branch -v
git branch -d <branch name>
git branch -m <new branch name>
// We first create the general-purpose allocator.
var gpa = std.heap.GeneralPurposeAllocator(.{}){};
// Then we get the general `std.mem.Allocator` struct from it.
// This is what we'll call to (de)allocate memory.
const allocator = gpa.allocator();
// Let's allocate 16 bytes of memory.
const some_bytes: []u8 = try allocator.alloc(u8, 16);
// Maybe put a string into it.
std.mem.copyForwards(u8, some_bytes, "Hello, my world!");
// What's in the memory?
std.debug.print("{s}\n", .{some_bytes});
// Wait, don't we need to free the memory???
std.debug.print("normal 1\n", .{});
defer std.debug.print("defer 1\n", .{});
std.debug.print("normal 2\n", .{});
defer std.debug.print("defer 2\n", .{});
std.debug.print("normal 3\n", .{});
defer std.debug.print("defer 3\n", .{});
normal 1
normal 2
normal 3
defer 3
defer 2
defer 1
// We first create the general-purpose allocator.
var gpa = std.heap.GeneralPurposeAllocator(.{}){};
defer _ = gpa.deinit();
// Then we get the general `std.mem.Allocator` struct from it.
// This is what we'll call to (de)allocate memory.
const allocator = gpa.allocator();
// Let's allocate 16 bytes of memory.
const some_bytes: []u8 = try allocator.alloc(u8, 16);
// Maybe put a string into it.
std.mem.copyForwards(u8, some_bytes, "Hello, my world!");
// What's in the memory?
std.debug.print("{s}\n", .{some_bytes});
// LEAKKKKKKKK
// We first create the general-purpose allocator.
var gpa = std.heap.GeneralPurposeAllocator(.{}){};
defer _ = gpa.deinit();
// Then we get the general `std.mem.Allocator` struct from it.
// This is what we'll call to (de)allocate memory.
const allocator = gpa.allocator();
// Let's allocate 16 bytes of memory.
const some_bytes: []u8 = try allocator.alloc(u8, 16);
defer allocator.free(some_bytes);
// Maybe put a string into it.
std.mem.copyForwards(u8, some_bytes, "Hello, my world!");
// What's in the memory?
std.debug.print("{s}\n", .{some_bytes});
// Phew, no more leaks!
var buf: [16]u8 = undefined;
// We first create the fixed buffer allocator.
var fba = std.heap.FixedBufferAllocator.init(&buf);
// Then we get the general `std.mem.Allocator` struct from it.
// This is what we'll call to (de)allocate memory.
const allocator = fba.allocator();
// Let's allocate 16 bytes of memory.
const some_bytes: []u8 = try allocator.alloc(u8, 16);
defer allocator.free(some_bytes);
// Maybe put a string into it.
std.mem.copyForwards(u8, some_bytes, "Hello, my world!");
// What's in the memory?
std.debug.print("{s}\n", .{some_bytes});
var gpa = std.heap.GeneralPurposeAllocator(.{}){};
defer _ = gpa.deinit();
const allocator = gpa.allocator();
// Initialise the ArrayList, passing in the allocator it will use to dynamically
// allocate memory for its items.
var some_array_list = std.ArrayList(i32).init(allocator);
defer some_array_list.deinit(); // REMEMBER TO DEINIT WHAT YOU INIT!!
// Append some items to the list. Notice how we need to use `try` here, since the
// memory allocation can fail.
try some_array_list.append(3);
try some_array_list.append(8);
try some_array_list.append(4);
try some_array_list.append(39);
// Remove some items in the list. Notice how we don't allocate memory here, so we
// don't need to use `try`. But we need to assign the result to something.
_ = some_array_list.orderedRemove(1);
// Iterate through the array list.
for (some_array_list.items) |item| {
std.debug.print("array list item: {}\n", .{item});
}
var gpa = std.heap.GeneralPurposeAllocator(.{}){};
defer _ = gpa.deinit();
const allocator = gpa.allocator();
// Initialise the ArrayListUnmanaged. Notice we don't need to pass any allocator here,
// instead we just need to pass it in `deinit`.
var some_array_list = std.ArrayListUnmanaged(i32){};
defer some_array_list.deinit(allocator); // REMEMBER TO DEINIT WHAT YOU INIT!!
// Append some items to the list. Notice how we need to pass the allocator here, and also
// use `try` here, since the memory allocation can fail.
try some_array_list.append(allocator, 3);
try some_array_list.append(allocator, 8);
try some_array_list.append(allocator, 4);
try some_array_list.append(allocator, 39);
// Remove some items in the list. Notice how we don't allocate memory here, so we
// don't need to use `try` or pass any allocator. But we need to assign the result
// to something.
_ = some_array_list.orderedRemove(1);
// Iterate through the array list.
for (some_array_list.items) |item| {
std.debug.print("array list item: {}\n", .{item});
}
button {
background-color: black;
color: white;
border: solid rgb(160, 78, 146) 3px;
border-radius: 15px;
margin-left: 5px;
padding: 5px 8px 5px 8px;
}
#useless-button {
border-radius: 0;
}
element:pseudoclass { /* single colon */
/* styles */
}
element::pseudoelement { /* double colon */
/* styles */
}
button:hover {
font-weight: bold;
border-radius: 5px;
}
body {
background-color: #25272a;
color: honeydew;
font-family: 'Trebuchet MS', sans-serif;
}
a {
color: yellowgreen;
}
#html-logo {
width: 300px;
height: 176px
}
.long-div {
font-style: italic;
}
.short-div {
font-variant: small-caps;
}
button {
background-color: black;
color: white;
border: solid rgb(160, 78, 146) 3px;
border-radius: 15px;
margin-left: 5px;
padding: 5px 8px 5px 8px;
}
#useless-button {
border-radius: 0;
}
button:hover {
font-weight: bold;
border-radius: 5px;
}
# .github/workflows/ci.yml
name: CI/CD Pipeline
on: [pull_request, workflow_dispatch]
jobs:
  linting:
    runs-on: ubuntu-latest
    steps:
      - name: Fetch repository
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'yarn'
      - name: Install dependencies
        run: |
          yarn
      - name: Lint code
        run: |
          yarn lint
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - name: Fetch repository
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'yarn'
      - name: Install dependencies
        run: |
          yarn
      - name: Run unit tests
        run: |
          NODE_ENV=production yarn test
- name: Lint code
  run: |
    yarn lint
git add .github/workflows/ci.yml
git commit -m "Add linting step"
git push -u origin main
if (true) {
useEffect(() => console.log("hi"), [])
}
newElement.className = "purple-link";
newElement.href = "https://orbital.comp.nus.edu.sg/";
newElement.setAttribute("class", "purple-link");
newElement.setAttribute("href", "https://orbital.comp.nus.edu.sg/");
let newElement = document.createElement("a");
newElement.setAttribute("className", "purple-link");
newElement.setAttribute("href", "https://orbital.comp.nus.edu.sg/");
newElement.innerText = "Link to Orbital page";let newElement = document.addElement("a");
newElement.setAttribute("className", "purple-link");
newElement.setAttribute("href", "https://orbital.comp.nus.edu.sg/");
let text = document.createTextNode("Link to Orbital Page");
newElement.appendChild(text); // see below on how to add elements to the pagelet tag1 = document.createElement("div"); // new element we make
let tag2 = document.getElementById("tag2"); // already present on page
document.body.insertBefore(tag1, tag2); // tag1 inserted before tag2 in the bodylet newChild = document.createElement("div"); // new element to make
let otherChild = document.getElementById("other-child"); // element to insert before
let parent = document.getElementById("parent"); // element to insert inside of
parent.insertBefore(newChild, otherChild);<div id="parent">
<div>Thing</div>
<div>Thing again</div>
<div id="other-child">Something</div>
<div>Something again</div>
</div>
<div id="parent">
<div>Thing</div>
<div>Thing again</div>
<div></div> <!-- the new element we made -->
<div id="other-child">Something</div>
<div>Something again</div>
</div>
let newChild = document.createElement("div"); // new element to make
newChild.innerText = "Child 5"; // add some text
let parent = document.getElementById("parent"); // element to insert inside of
parent.appendChild(newChild);
<div id="parent">
<div>Child 1</div>
<div>Child 2</div>
<div>Child 3</div>
<div>Child 4</div>
</div>
<div id="parent">
<div>Child 1</div>
<div>Child 2</div>
<div>Child 3</div>
<div>Child 4</div>
<div>Child 5</div>
</div>
let elementToRemove = document.getElementById("remove-this");
let parentElement = document.getElementById("parent");
parentElement.removeChild(elementToRemove);
let newChild = document.createElement("div");
let oldChild = document.getElementById("old-child");
let parent = document.getElementById("parent");
parent.replaceChild(newChild, oldChild);
git checkout branch-A
vim hello.txt
git add hello.txt
git commit -m "branch A changes"
git checkout branch-B
vim hello.txt
git add hello.txt
git commit -m "branch B changes"
git checkout main
git merge branch-A
git merge branch-B
Auto-merging hello.txt
CONFLICT (content): Merge conflict in hello.txt
Automatic merge failed; fix conflicts and then commit the result.
On branch main
Your branch is ahead of 'origin/main' by 1 commit.
(use "git push" to publish your local commits)
You have unmerged paths.
(fix conflicts and run "git commit")
(use "git merge --abort" to abort the merge)
Unmerged paths:
(use "git add <file>..." to mark resolution)
both modified: hello.txt
no changes added to commit (use "git add" and/or "git commit -a")
<<<<<<< HEAD
Hello
=======
Hello universe
>>>>>>> branch-B
Hello
Hello universe
git add hello.txt
On branch main
Your branch is ahead of 'origin/main' by 1 commit.
(use "git push" to publish your local commits)
All conflicts fixed but you are still merging.
(use "git commit" to conclude merge)
Changes to be committed:
modified: hello.txt
function Component() {
let counter = 0;
// DON'T DO THIS - mutating counter doesn't tell React to re-render
return <Button onPress={() => counter += 1} />;
}
function Component() {
const [counter, setCounter] = useState(0);
// This is correct - the counter value is updated in the next render
return <Button onPress={() => setCounter(counter + 1)} />;
}
import React, { useState } from 'react';
import { View, TextInput } from 'react-native';
const TextInputComponent = () => {
const [text, setText] = useState('');
return (
<View>
<TextInput
style={styles.input}
value={text}
onChangeText={setText}
placeholder="Type here"
/>
</View>
);
};
import React, { useState } from 'react';
import { View, Button, Modal, Text } from 'react-native';
const ModalComponent = () => (
<View>
<Text>This is a modal!</Text>
</View>
);
const App = () => {
const [isModalOpen, setIsModalOpen] = useState(false);
const toggleModal = () => {
setIsModalOpen(!isModalOpen);
};
return (
<View >
<Button title="Toggle Modal" onPress={toggleModal} />
{isModalOpen && (
<Modal
transparent={true}
visible={isModalOpen}
onRequestClose={toggleModal}
>
<View>
<ModalComponent />
<Button title="Close Modal" onPress={toggleModal} />
</View>
</Modal>
)}
</View>
);
};
if (booleanExpression) {
// do thing 1
} else if (otherBooleanExpression) {
// do thing 2
} else {
// do something else
}
let num = prompt("Enter a number"); // recall that this allows the user to input a value
if (num === null) { // prompt() returns null if the user gives no input
alert("You did not enter a number");
} else if (num % 2 === 0) {
alert("The number you entered was even");
} else {
alert("The number you entered was odd");
}
predicate ? returnIfTrue : returnIfFalse;
// assume num is guaranteed to be a number
num % 2 === 0 ? "Even" : "Odd"; // if num is even then "Even" is returned, else "Odd" is returned
for (counter; limit; step) {
// do something
}
while (condition) {
// do something
}
for (let i = 0; i < 11; i++) {
console.log(i);
}
let i = 0;
while (i < 11) {
console.log(i);
i++;
}
for (let k = 1024; k > 1; k /= 2) {
console.log(k);
}
for (let i = 0, j = 15; i < 5; i++, j -= 3) {
console.log(i, j); // prints the values of i and j separated by a space
}
let str = "eowjpfi45ofr;kl143";
let i = 0;
while (i < str.length) {
console.log(str[i]);
i += 2;
}
for (let i = 0; i < str.length; i += 2) {
console.log(str[i]);
}
body {
background-color: #25272a;
}
body {
background-color: #25272a;
color: honeydew;
}
body {
background-color: #25272a;
color: honeydew;
font-family: 'Trebuchet MS', sans-serif;
}
a {
color: yellowgreen;
}
#html-logo {
width: 300px;
height: 176px;
}
.long-div {
font-style: italic;
}
.short-div {
font-variant: small-caps;
}
body {
background-color: #25272a;
color: honeydew;
font-family: 'Trebuchet MS', sans-serif;
}
a {
color: yellowgreen;
}
#html-logo {
width: 300px;
height: 176px
}
.long-div {
font-style: italic;
}
.short-div {
font-variant: small-caps;
}
echo 'Hello world' >> hello.txt
git status
On branch main
No commits yet
Untracked files:
(use "git add <file>..." to include in what will be committed)
hello.txt
nothing added to commit but untracked files present (use "git add" to track)
git add hello.txt
On branch main
No commits yet
Changes to be committed:
(use "git rm --cached <file>..." to unstage)
new file: hello.txt
git commit -m "First commit"
let x = 10;
let y;
y = ++x; // x is 11, and y is also 11
y = x++; // x is 12, but y remains 11 because the increment happens after assignment
let str1 = "hello";
let str2 = "world";
str1 + str2; // evaluates to "helloworld"
let arr1 = [1, 2];
let arr2 = [3, 4];
arr1 + arr2; // evaluates to the string "1,23,4", because both arrays are converted to strings before concatenation
let x = 2;
let y = 3;
x ** y; // 8
let x = 5;
let y = 3;
x % y; // 2
let str1 = "abcd";
let str2 = "abdc";
str1 > str2; // returns false
"abcd" > "abc"; // returns true
2 === 2; // returns true
2 === 3; // returns false
2 === "abcd"; // returns false
2 === "2"; // returns false
2 === [2]; // returns false
2 == 2; // returns true
2 == 3; // returns false
2 == "abcd"; // returns false
2 == "2"; // returns true, because "2" is converted to 2
2 == [2]; // returns true, because [2] is converted to 2
2 == [2, 2]; // returns false, because there is more than one value in the array now
2 === 2 && 3 < 4; // true because 2 is equal to 2 AND 3 is less than 4
2 === 2 && 3 > 4; // false because 3 is not greater than 4
2 === 3 && 3 > 4; // false
2 === 2 || 3 < 4; // true
2 === 2 || 3 > 4; // true because 2 is equal to 2
2 === 3 || 3 > 4; // false because 2 is not equal to 3 and 3 is not greater than 4
let value = 2 === 2 || 3 < 4; // true
!value; // false
value; // still true
let x = 10; // assigning 10 to x
let x = 10;
x += 5; // same as x = x + 5; x is now 15
x -= 2; // same as x = x - 2; x is now 13
x *= 10; // same as x = x * 10; x is now 130
x **= 2; // same as x = x ** 2; x is now 16900
x; // 16900
let y = x > 1000; // 16900 is greater than 1000, so y is true
y &&= (x !== 4); // same as y = y && (x !== 4)
// x is not equal to 4, so the expression in brackets evaluates to true
y; // true && true gives truelet m = 4;
m++; // post-increment, m is now 5
m--; // post-decrement, m is back to 4
--m; // pre-decrement, m is now 3
++m; // pre-increment, m is back to 4 again
"10" + 1; // "101" because 1 gets converted to a string and JS performs string concatenation
"10" - 1; // 9 because "10" gets converted to a number and JS performs subtraction
4 / "2"; // 2 because "2" gets converted to a number
"3" * "2"; // 6 because JS converts both operands to numbers
[3] / "10"; // 0.3, same reason
let arr = [[1, 2, 3], [4, 5, 6]]; // 2D nested array
arr.indexOf([1, 2, 3]); // returns -1 even though [1, 2, 3] is an element
let arr1 = [1, 2, 3];
let arr2 = [arr1, [4, 5, 6]];
arr2.indexOf(arr1); // returns 0 now
let arr = [1, 2, 3, 4];
let emptyArr = [];
let nestedArrs = [[1, 2, 3], [4, 5], [6]];
let jumbledArr = [2, true, false, "2.43", [4, ["hello"], []]];
let arr = [1, 2, 3, 4];
arr[0]; // 1
arr[2]; // 3
let nestedArr = [
[1, 2, 3, 4],
[5, 6, 7],
[8, 9]
]
nestedArr[0]; // [1, 2, 3, 4]
nestedArr[0][1]; // 2
nestedArr[2][1]; // 9
let nums = [1, 2, 3, 4];
nums[2] = 10;
nums; // [1, 2, 10, 4]
arr.length; // 4
nestedArr.length; // 3
nestedArr[2].length; // 2
let arr = []; // empty array
arr[0] = 1;
arr; // [1]
arr[1] = 2;
arr; // [1, 2]
let arr = [1, 2, 3];
arr[4] = 4;
arr; // [1, 2, 3, undefined, 4]
let arr = [1, 2, 3];
let l = arr.push(4);
arr; // [1, 2, 3, 4]
l; // 4
let arr = [];
let l = arr.push(1, 2, 3, 4, 5, 6);
arr; // [1, 2, 3, 4, 5, 6]
l; // 6
let nums = [1, 2, 3, 4];
let x = nums.pop();
nums; // [1, 2, 3]
x; // 4
let arr = [1, 2, 3, 4];
arr.indexOf(3); // 2
arr.indexOf(23); // -1
let arr = [1, 2, 3, 4, 5, 6, 7, 8, 9];
arr.slice(2, 5); // [3, 4, 5]
arr.slice(4); // [5, 6, 7, 8, 9]
arr.slice(6, 2); // []
arr.slice(-4); // [6, 7, 8, 9]
let nums = [3, 4, 6, 1, 2, 5];
nums[3]; // 1
nums.sort();
nums; // sorted to [1, 2, 3, 4, 5, 6]
nums[3]; // 4
let nums = [3, 4, 6, 1, 2, 5];
let sortedNums = nums.toSorted();
nums; // still [3, 4, 6, 1, 2, 5]
sortedNums; // [1, 2, 3, 4, 5, 6]
let nums = [1, 2, 3, 4, 5, 6];
nums[3]; // 4
nums.reverse();
nums; // [6, 5, 4, 3, 2, 1]
nums[3]; // 3
let nums = [1, 2, 3, 4, 5, 6];
let reversedNums = nums.toReversed();
nums; // still [1, 2, 3, 4, 5, 6]
reversedNums; // [6, 5, 4, 3, 2, 1]
let nums = [3, 5, 1, 2, 6, 4];
nums.sort();
nums.reverse();
nums; // now [6, 5, 4, 3, 2, 1]
let nums = [1, 2, 3, 4, 5];
let squaredNums = [];
for (let i = 0; i < nums.length; i++) {
squaredNums.push(nums[i] ** 2);
}
squaredNums; // [1, 4, 9, 16, 25]
let nums = [1, 2, 3, 4, 5];
let squaredNums = [];
nums.forEach(x => squaredNums.push(x ** 2));
squaredNums; // [1, 4, 9, 16, 25]
let x; // no value assigned; by default undefined
x = 10; // x is now 10, a number
x = "hello"; // x is now "hello", a string
x = true; // x is now true, a boolean
let x = 23e3; // x is 23 times 10^3
let y = 4e-5; // y is 4 times 10^-5
let str = "Hello world!";
let str = "H0LA";
console.log(str[1]); // will print out "0"
let str = "orbital";
str.length; // returns 7
let arr = [1, -2, 3, true, "random string here", ["another array", 4]];
arr[0]; // returns 1
arr[4]; // returns "random string here"
arr[5]; // returns ["another array", 4]
arr[5][1]; // returns 4
arr.length; // returns 6
let myObj = {firstName: "Prakamya", lastName: "Singh", year: 1, isComputingStudent: true};
myObj["firstName"]; // returns "Prakamya"
myObj.firstName; // also returns "Prakamya"
let x;
// Let's declare some variables first
let num = 10;
let otherNum = 9.9;
let str = "some string";
let bool = true;
let arr = [1, false, "hello"];
let obj = {car: "Puma", licensePlate: "SLG2034B", topSpeed: 120};
// Now let's check their data types using the typeof function.
typeof(num); // returns "number"
typeof(otherNum); // still returns "number"
typeof(str); // returns "string"
typeof(bool); // returns "boolean"
typeof(obj); // returns "object"
typeof(arr); // returns "object", not "array"
// assume we have our same variables as before
arr instanceof Array; // returns true
arr instanceof Object; // still returns true
obj instanceof Object; // returns true
obj instanceof Array; // returns false
// assume we have our same variables as before
num instanceof Number; // returns false
str instanceof String; // returns false
bool instanceof Boolean; // returns false
Event Listeners in JavaScript are functions that wait and "listen" for events (like clicks) on the page. Once the event happens, they execute a listener function. These listeners are nullary functions with no return value.
An event listener can be added either to an element on the page (like a button, a paragraph or a form), or to the page itself. The method to add an event listener is addEventListener, and it is used as follows:
Go back to the html file from before, and remove the button's onclick attribute. Then, open a new file and save it as script.js. Here, write the following code:
The first line will query for the button, and the second line adds an event listener. The event it listens for is "click", and the listener is a lambda expression for an anonymous nullary function.
Save the code, and import it into the html document by adding the following line into the head of the document:
Now save the file, reload the page, click the button and...
Hmmm. Nothing is happening. Go back, check the syntax and spelling. No issues there. Check the browser console, and aha. There's an error: Uncaught TypeError: button is null
Well, we're running the same query that we were running in the Browser console in the previous sections, and that worked fine. What's the issue here?
Let's take a look at our HTML file:
The browser reads the file from top to bottom, rendering elements and styles as it reads them line by line. This creates the issue that the JavaScript code is read and executed before the rest of the document is rendered. So the button we're querying for does not exist by the time the code is read, which means that querying the document for the button returns null, hence the TypeError we see.
The easiest way to fix it is to execute the code after the document is rendered. This involves moving it to the bottom of the file:
Save the file, reload the page, and try clicking the button. It should display a popup with "Hello!" on it. If it doesn't, check for syntax errors or spelling mistakes in the code and in your HTML file, and make sure the file is saved and the page is reloaded.
This is an easy way to fix the issue, and is quite convenient. However, it means that the code is not executed until the entire page is loaded and rendered, including images, stylesheets and other large files. So if there is just one single file or image that is taking time to load, the JS code will not be executed until it is loaded, which could cause some inconvenience as event listeners won't be added in time.
The second option is to wait till the elements of the page are loaded into the DOM (which is done before rendering the elements on the page itself). This will happen much earlier and is faster than waiting for the elements to render. To do this, there is a special event only applicable to the document itself. This event is called "DOMContentLoaded" (case-sensitive). It can be used as below:
This allows the JS code to be executed well before the page renders, so event listeners and other JS-dependent elements or features are ready as soon as the page is rendered and displayed to the user. Let's apply this to our code:
Now move the <script> tag back into the head of the document, save the file, reload the page on the browser, and the button should work:
Next, we'll make a simple example of querying and updating the DOM using a click counter.
Then, go to the Github page of the repository. You should see this yellow textbox show up:
You can create a pull request directly from the "Compare & pull request" button in the yellow box.
However, we would like to orient you to the Github PR UI a little more. So you can click on the "Pull requests" tab and you will see the following page:
Right now, there isn't a lot going on. However, there are two key UI components that you should take note of. The first is the search bar that supports advanced search syntax. The other is the "New pull request" button which we will use to create this PR. The large space in the second half of the page will display all open PRs (if any).
The most important step here is selecting the source and target branch (both remote). These are the branches used for merging as detailed in Combining changes of branches. So, the source branch is the compare branch in the UI and the target branch is the base branch.
If you are following along, you can select the dropdown for compare and you should see the following options:
You can select the sample-pr branch that we pushed earlier and you should see the UI updated with the changes made in the branch:
Once you have selected your source and target branches, you can select "Create pull request". Then, you will be prompted to enter some additional details about the PR:
Title: quick summary of the PR
Description: details about the PR such as what the PR contains, what it aims to resolve, etc.
Reviewers: who is going to check and verify these changes
Assignees: who is actively working on these changes (usually the person creating the PR)
Labels: tags for the PR (useful for filtering and searching)
Projects: associates the PR with a specific project
Milestone: associates the PR with a milestone
The only mandatory field is the title field. So, for now, we can leave the title as it is or give it a new value. Then, you can create the PR:
There is a new UI component representing the CI/CD actions that may have been run on the repository. We will cover this later on under CI/CD with Github Actions.
Here in the PR preview, you can add/view the comments made.
For now, we can merge the PR by selecting "Merge pull request". Then, confirm the merge and when you navigate back to the main repository page and view the file hello.txt, you would see your changes from the branch added to the main branch.
Wonderful! Now that we have established what PRs are and how they can be created, we can look at how we can employ them for collaborative workflows.
white, green, coral, cornflowerblue, limegreen, whitesmoke
To understand this, we need to understand how colors are represented.
A color on the screen is actually a combination of 3 main colors: red, green and blue. Combining different "strengths" or "intensities" of these colors leads to a different color that is rendered on screen. The lower the intensity, the darker the shade of the color and the less prominent it is.
The intensity is a decimal integer that ranges from 0 to 255 (and no higher or lower). This is then converted into hexadecimal (a number system with 16 digits: 0 to 9, then a-f for 10-15 respectively) which reduces it to two digits.
The 3 intensities are then combined into one 6-digit hexadecimal value, which is used to represent the color. The red intensity comes first, followed by green, then lastly blue.
When assigning hexadecimal code colors as color values, a # symbol is prefixed before the 6-digit hex value. Note that again quotation marks need to be omitted.
Format of color: #rrggbb
If you want pure red, then you max out the red value and minimize the green and blue. This means that the red value is 255 and the green and blue are both 0. Since 255 in decimal is ff in hex, the color code for pure red becomes #ff0000
The same can be done for pure blue and pure green.
In HTML, if all the values (red, green and blue) are maxed out, then you get the color white: #ffffff
On the other hand, removing all color leaves black: #000000
More generally, if the intensities of red, green and blue are equal to each other you end up with a shade of grey. The lower the intensities, the darker the shade.
Here are some more examples:
Yellow is max red, max green, no blue: #ffff00
Orange is max red, little less green, no blue: #ffa500
Violet is medium red, no green and max blue: #7f00ff
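The decimal-to-hex composition described above can be sketched in JavaScript. This is a hypothetical helper (not part of the workshop code): each intensity is converted to two hexadecimal digits, and the three pairs are joined behind a #.

```javascript
// Hypothetical helper: build a #rrggbb color code from red, green, blue intensities (0-255 each).
function rgbToHex(r, g, b) {
  // Convert one 0-255 intensity into exactly two hexadecimal digits.
  const toHexPair = (n) => n.toString(16).padStart(2, "0");
  return "#" + toHexPair(r) + toHexPair(g) + toHexPair(b);
}

console.log(rgbToHex(255, 0, 0));     // pure red: "#ff0000"
console.log(rgbToHex(127, 0, 255));   // violet: "#7f00ff"
console.log(rgbToHex(255, 255, 255)); // white: "#ffffff"
```

Running the conversion both ways (decimal to hex and back) is a quick way to convince yourself that, say, 127 really is 7f in hexadecimal.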
Coming back to CSS color values, you can assign the 6-digit hexadecimal value to a property the same way to assign a color by name. For example: color: #7f00ff; (violet, as above).
There are online HTML color pickers, and some IDEs (like Visual Studio Code) provide ways to visualise a color using its hex code while you write CSS, so there is no need to memorize color codes. Additionally, if there is a font color or some such color on a website that you like, you can use the Browser Inspector to select the element and read its styling rules to see what its color code is.
The end of the page will show another way to pick a color from a page using the Eyedropper.
While hex codes provide a lot of flexibility and control over the precise shade of color that you want to use, they are rather unintuitive. CSS provides the rgb function, which takes in 3 arguments and returns a color value. The arguments are integers from 0 to 255 and are the red, green and blue values (left to right). They are slightly easier to use than the direct hex codes, and are supported by all browsers.
To represent the color violet as above, I would write: color: rgb(127, 0, 255);
Again, there are color pickers online that can be used to find the RGB values for particular colors and color palettes.
The rgba function performs like the rgb function but takes an additional argument. This argument is a decimal number between 0 and 1 inclusive, and indicates the opacity of the color. 1 means the color is fully opaque and 0 means the color is fully transparent.
Usage: color: rgba(127, 0, 255, 0.6);
Firefox has a tool called the Eyedropper that, when active, can pick out the exact html color code of anything present on a webpage. To use this, open up the Browser Inspector and find the dropper icon at the top right corner of the html window (see below).
Alternatively, you can customize the Firefox toolbar to add the Dev Tools menu with the Eyedropper.
Now click on the icon and you'll have the Eyedropper active. It takes control of your cursor and shows you the exact hexadecimal color code of the pixel your cursor is on. If you click while it is active, it copies the color code to your clipboard and deactivates (returning your cursor to normal). To deactivate it without copying the color code, press the esc key.
Here's how it works (note that I have customized my toolbar to have the Eyedropper and Developer tools accessible from there without using the keyboard shortcut):
Later, we'll use the color picker to choose a suitable background color for our HTML document.
Next, we'll see how measurements in CSS work and what valid measurement values can be assigned to elements.
So we know how to run commands from an interactive prompt, but what if we want to save the commands we run so that we can reuse them in the future? That's where scripting comes into play.
You can write programs directly at the prompt, or write into a file (writing scripts)
Open an editor (for beginners, nano is recommended), and save the script as example-script
On your shell, run chmod +x example-script
You can run your script as ./example-script
Most command line utilities take parameters using flags. They come in short form (-h) and long form (--help). Usually, running COMMAND -h or man COMMAND will give you a list of the flags the program takes.
Short flags can be combined: rm -r -f is equivalent to rm -rf or rm -fr
A double dash -- is used to signify the end of command options, after which only positional parameters are accepted.
For example, to create a file called
There are a few flags that are widely accepted and have similar meanings throughout many programs
-a commonly refers to all files (i.e. also including those that start with a period[^4])
-f usually refers to forcing something, e.g. rm -f
-h displays the help for most commands
The Unix Directory Structure
Unix has a different directory structure from Windows.
There is no concept of drives.
Everything is files and directories. The root directory is /
We use forward slash / instead of backward slash \
Specifically for Linux, there is FHS
/bin, /sbin, /usr/bin, /usr/local/bin, /opt = executables
On Linux: /home = user home directories
We've seen this command before, but we've never assigned it the proper terminology. Whenever we type something out, we can split the input into COMMANDs and ARGs (short for arguments)
COMMAND ARG1 ARG2 ARG3
Used to store text
name=value to set variable
$name to access variable
There are also a bunch of special variables we can use in our scripts:
$?: get exit code of the previous command
$1 to $9: arguments to a script
$0: name of the script itself
On top of variables you can declare, there are a bunch of global variables that are declared in order for your system to run. We call these Environment Variables. You can see the full list of environment variables using the command:
Create a script variable-example containing the code below, then try running it with various arguments.
Loop is used to run a command a bunch of times.
For example:
Let's unpack this!
for x in list; do BODY; done
; terminates a command -- equivalent to newline
Split list, assign each to x, and run BODY
Split by "whitespace" -- we will get into it later
So, knowing the above,
$(seq 1 5)
Run the program seq with arguments 1 and 5
Substitute the $(...)
Let's unpack this!
CONDITION is a command.
If its exit code is 0 (success), then BODY is run.
Optionally, you can also hook in an else or elif
So, knowing the above,
test -d /bin
test is a program that provides various checks and comparison which exits with exit code 0 if the condition is true.
Alternate syntax: [ condition ], e.g. [ -d /bin ]
Let's create a command that only prints directories
Bug! Hold on! What if the directory is called "My Documents"?
for f in $(ls) expands to
for f in My Documents
Will first perform the test on My, then on Documents
Bash splits arguments by whitespace (tab, newline, space)
Same problem somewhere else: test -d $f
If $f contains whitespace, test will error!
bash knows how to look for files using patterns:
Thus, for f in * means all files in this directory
When globbing, each matching file becomes its own argument
However, still need to make sure to quote, e.g.
test -d "$f"
You can make advanced patterns
for f in a*: all files starting with a in the current directory
for f in foo/*.txt: all .txt files in foo
for f in foo/*/p??.txt
if [ $foo = "bar" ]; then: What's the issue?
What if $foo is empty? arguments to [ are = and bar
Possible workaround: [ x$foo = "xbar" ]
The mentioned problems are the most common bugs in shell scripts.
A good tool to check for these kinds of possible bugs in your shell script:
Shell is powerful, in part because of Composability
You can chain multiple programs together, rather than one program that does everything
Remember The Unix Philosophy:
Write programs that do one thing and do it well.
cat /var/log/sys*log | grep "Sep 10" | tail
cat /var/log/sys*log prints the system log
This output is fed into grep "Sep 10", which looks for all entries from Sep 10.
This output is then further fed into tail, which prints only the last 10 lines.
All programs launched have 3 streams:
STDIN: the program reads input from here
STDOUT: the program prints to here
STDERR: the program prints errors to here
However, this can be changed!
a | b: makes STDOUT of a the STDIN of b.
a > foo
So why is this useful?
It lets you manipulate output of a program!
ls | grep foo: all files that contain the word foo
ps | grep foo: all processes that contain the word foo
On Linux: journalctl | grep -i intel | tail -n 5: last 5 system log messages with the word intel
(a; b) | tac: runs a, then b, and sends all their output to tac[^7]
For example: (echo qwe; echo asd; echo zxc) | tac
b <(a): runs a, generates a temporary file name for its output stream, and passes that filename to b
To demonstrate: echo <(echo a) <(echo b)
On Linux: diff <(journalctl -b -1 | head -n20) <(journalctl -b -2 | head -n20)
Used to run longer-term things in the background.
Use the & suffix
It will give back your prompt immediately.
For example: (for i in $(seq 1 100); do echo hi; sleep 1; done) &
Sometimes piping doesn't quite work because the command being piped into does not expect the newline separated format.
For example, the file command tells you properties of a file.
Try running ls | file and ls | xargs file
Strings, as mentioned before are any ordered set of characters enclosed in quotation marks.
To access a particular character in a string at a particular position, there are 3 ways:
Square bracket indexes ([])
The charAt method
The at method
Let's create a string, and use the 3 methods to compare outputs with different cases:
Usage: str[i] where str is the string and i is a nonnegative integer
We can see from the above that passing in any value for i that is not a nonnegative integer less than the length of the string results in undefined.
charAt method
Usage: str.charAt(i) where str is the string and i is a number
We can see from the above that:
passing in a negative value for i returns an empty string "" (a fractional value between -1 and 0 truncates to index 0)
passing in a decimal value for i returns the character at position floor(i)
passing in a value greater than the length for i returns an empty string ""
at method
Usage: str.at(i) where str is the string and i is a number
We can see from the above that the at method behaves similarly to the charAt method except for negative indices. In this case, an index of -1 corresponds to the last character in a string, and every negative number after that is counting backwards. Thus an index of -2 is the second-to-last character in the string, -3 is the third-to-last character, and so on.
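A quick sketch comparing the three access methods side by side (the sample string is arbitrary):

```javascript
let str = "orbital";

// Square brackets: out-of-range or negative indices give undefined.
console.log(str[0]);  // "o"
console.log(str[-1]); // undefined
console.log(str[99]); // undefined

// charAt: out-of-range (including negative) indices give "".
console.log(str.charAt(0));  // "o"
console.log(str.charAt(-1)); // ""
console.log(str.charAt(99)); // ""

// at: negative indices count backwards from the end.
console.log(str.at(0));  // "o"
console.log(str.at(-1)); // "l" (last character)
console.log(str.at(-2)); // "a" (second-to-last character)
```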
Strings are provided with a whole host of functions that can operate on strings. These functions are called string methods, and W3Schools has a full reference of these functions. For now, we'll look at a few in detail, and how they work.
We've already seen 2 string methods that help to get a character in the string at a particular position: charAt and at.
Let's look at some string formatting methods.
To convert a string to all uppercase, we have the toUpperCase method. You can probably guess from this that the method to convert a string to all lowercase is toLowerCase.
Note that the value of name has not changed after applying the string methods to it. This is because strings are immutable, so the methods just create new strings.
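A small sketch of this immutability (the sample value is just an example):

```javascript
let name = "Orbital";

// These methods return NEW strings; the original is untouched.
let shouting = name.toUpperCase();
let whispering = name.toLowerCase();

console.log(shouting);   // "ORBITAL"
console.log(whispering); // "orbital"
console.log(name);       // still "Orbital" - strings are immutable
```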
To get a subsection of a string, called a substring, there are a few methods.
slice
The first is the slice method. This takes in two arguments, a start and end, and returns the characters in the string from the start index to just before the end index:
Omitting the end value will return all the characters from the start to the end of the string:
If the end value is less than or equal to the start value, an empty string is returned:
If only one argument k is passed in, and the argument is negative, then the last |k| characters are returned:
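The slice behaviors above can be sketched together as follows (the sample string is arbitrary):

```javascript
let str = "javascript";

console.log(str.slice(0, 4)); // "java" - start inclusive, end exclusive
console.log(str.slice(4));    // "script" - no end, runs to the end of the string
console.log(str.slice(7, 3)); // "" - end before start gives an empty string
console.log(str.slice(-6));   // "script" - a single negative argument counts from the end
```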
substr
The next is the substr method. This takes in two arguments, a start and length, and returns the first length characters from the start index inclusive.
Omitting the length value will return all the characters from the start to the end of the string:
If the start is negative, then the characters are counted backwards:
substring
The last method is the substring method. This works like the slice method but with one change. If start is greater than end, then the two are swapped:
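A sketch of the swap behavior, contrasted with slice (the sample string is arbitrary):

```javascript
let str = "javascript";

// substring swaps the arguments when start > end...
console.log(str.substring(4, 0)); // "java" - treated as substring(0, 4)
// ...whereas slice just returns an empty string in that case.
console.log(str.slice(4, 0));     // ""
```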
There are a few methods to help check if one string is a substring of another. These are includes, startsWith and endsWith. They are case-sensitive and their names are pretty self-explanatory.
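A minimal sketch of the three checks (the sample strings are arbitrary):

```javascript
let sentence = "Hello Orbital";

console.log(sentence.includes("Orb"));     // true
console.log(sentence.includes("orb"));     // false - the checks are case-sensitive
console.log(sentence.startsWith("Hello")); // true
console.log(sentence.endsWith("tal"));     // true
```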
Let's say we want a simple program that will add two numbers and output their sum in the form "x + y = sum".
Passing multiple arguments to console.log causes them to be joined into one string with each argument separated by a space. However, this method can only be used when a function is able to take in multiple arguments and knows to concatenate them into a string, like console.log. The same code will not work with alert (see below).
As you hopefully recall, the + operator, when applied to two strings (or to a number and a string), performs string concatenation. This results in a single string being passed in to the function to print out. The advantage of this is that string concatenation produces a string value, so it can be passed in to any function, and can be assigned to a variable/constant as needed. However, if I was not careful and omitted the brackets around x + y at the end, I would run into a problem (see below).
This method works similarly to string concatenation, but when substituting multiple values into a string it looks more clean and is easier to read. To form a string template, wrap the string in backticks (``) instead of quotes. Then, at the positions of the variables, use curly braces and the $ sign before the braces, and place the variable/expression between the braces. See below:
While this has the same effect as before, it looks a lot cleaner and it's clearer to anyone reading the code as to what is going on. There is also less chance of operations getting confused.
Next, we'll look at arrays and some useful array methods.
Functions can be thought of as blocks of code that perform particular tasks. In JavaScript, like in most other programming languages, functions can take in a number of arguments and return a value.
The syntax to declare a function is as follows:
A function can take any number of arguments, and the arity of a function is the number of arguments it takes:
A nullary function takes in no arguments,
A unary function takes in 1 argument,
A binary function takes in 2 arguments,
A ternary function takes in 3 arguments, and so on
A function can return any value, or can return no value at all. In this case, the return value of the function is undefined. An example is the console.log function, which logs something to the console and has no return value.
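A quick sketch of this (greet is a made-up function for illustration): capturing the "result" of a function with no return statement gives undefined.

```javascript
// greet is a hypothetical function with no return statement
function greet(name) {
  console.log("Hello, " + name + "!"); // prints to the console
}

// Since greet has no return statement, calling it evaluates to undefined
const result = greet("Dino");
result === undefined; // true
```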
An example of a unary function is below:
This function uses a for loop to calculate the sum of numbers from 1 to its input n. You could instead use the formula:
To call a function on a value, use the name of the function and pass in the argument(s) (if any) in brackets:
Here are some more functions:
Sometimes you may want to create a function that can take in a variable number of arguments. These functions are called variadic functions. The syntax to declare a variadic function is as follows:
In the function body, args can be treated as an array with a length property and indexing.
Python has a sum function that can take in any number of numbers and returns their sum. Let's recreate this in JavaScript:
To use a variadic function, simply call the function and pass in any number of arguments:
Let's go back to our sumToN function:
Note that we only have one line in the function body, a return statement that performs a relatively simple math operation. Yet just for this one line return statement we need 2 extra lines to declare the function. Fortunately, JavaScript provides a way to shorten such one-line functions using lambda functions. The syntax to declare a lambda function is as follows:
This is equivalent to:
Since the let keyword creates a variable, this means that lambdaFunction can be reassigned to a different value. This is bad - what if we accidentally reassign our function to a different value? Instead, we use the const keyword, which declares a constant. As the name suggests, constants cannot have their values reassigned once assigned.
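A small sketch of the difference (the names add1 and add2 are made up for illustration): reassigning a let binding silently succeeds, while reassigning a const throws a TypeError at runtime.

```javascript
// With let, the function can be silently clobbered later:
let add1 = n => n + 1;
add1 = "oops"; // no error - the function is gone

// With const, reassignment throws a TypeError at runtime:
const add2 = n => n + 1;
let error = null;
try {
  add2 = 42; // TypeError: Assignment to constant variable
} catch (e) {
  error = e;
}
```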
Now with all this in mind, let's rewrite our sumToN function:
Calling a lambda function is the same as calling a regular function:
Lambda expressions can have any arity (i.e. can take any number of arguments) and can be used to write functions that span multiple lines by using curly braces {}:
We have now rewritten the sum function from before as a lambda expression. Note that when writing multiline lambda expressions, the return keyword needs to be present.
Lambda expressions can also be declared as below:
This is more useful for complex anonymous functions with long bodies.
Higher order functions are functions that take in function(s) as argument(s) and/or return a function. A simple example of a higher order function is a mapper function:
Here, the map function takes in two arguments: a function func and a value val. It then applies the function func to the value val and returns the result.
Let's try using this function:
Lambda functions are quite useful when it comes to higher-order functions because they let us shorten expressions. Let's rewrite the map function example above with lambda functions:
If there is a particular function that is only ever going to be called once during the program execution, then there is no need to assign the function a name. These are called anonymous functions and can use lambda function notation.
Rewriting the above example using an anonymous function:
Next, we'll take a look at strings in JavaScript and some string methods.
Servers are machines whose purpose is to provide a service or content over a network. They are typically administered remotely and only connect physically to power and a network. They "serve" content or services using software daemons. Their natural habitat is the datacenter, where they live in racks to survive off electricity and network data. While they are not able to reproduce, they have no natural predators, so their population is stable. Some breeds of server can be found in network/data closets where they live in a business. Fewer are still kept in captivity in private homes. Virtual servers are servers that are run under an emulator or hypervisor to provide a server-like environment using a software envelope which may be augmented with hardware support.
For small projects and little experiments, there are some no cost options you can try!
A virtual private server, also known as a VPS, acts as an isolated, virtual environment on a physical server, which is owned and operated by a cloud or web hosting provider.
You can use your student email to get free credits for popular VPS providers, or if you want to go cheap with a lesser-known VPS, there's always
You can use anything as your server!
Old Laptops
Cheap NAS
Single Board Computers (Raspberry Pi)
We'll assume you have somehow gotten a server to work with; if you're following the workshop, you should already have something to work with! The following guide will assume your server is running Ubuntu, an operating system commonly found on servers.
Get a username and password to your server, and SSH with a client:
Terminal on MacOS
Windows Terminal or PuTTy on Windows
Any shell on Linux
You will be prompted for a password.
This is important if you are logged into your server as root. As root is a common username, there will be people enumerating through common usernames on every possible IP address just to try their luck and compromise servers.
sudo nano /etc/ssh/sshd_config
Make sure that PermitRootLogin is set to something like prohibit-password
(Optional) Disable password logins, if you're very sure you can take care of your keypair: PasswordAuthentication no. Otherwise, maybe just leave them on for now, especially if you have no other way of recovering access
Keys are a secure way to log into remote computers without using passwords. Here's a simple explanation:
SSH keys come in pairs: a public key and a private key
The public key is like a padlock that you put on the remote server
The private key is like the key to that padlock, which you keep on your local computer
When you try to log in, your computer uses the private key to prove it can "unlock" the padlock
This method is more secure than passwords because:
The private key never leaves your computer
It's extremely difficult for someone to guess or crack your key
Even if someone intercepts your login attempt, they can't see your private key
By using SSH keys, you can log in quickly and securely without typing a password each time.
While some service providers have a webshell, it's much nicer to be able to work in your own terminal (and significantly less laggy).
ssh-keygen -t ed25519
After that, take the pubkey string, then:
su user
mkdir .ssh
Now, you should be able to get a shell in your server, without any passwords!
While this is out of the scope of this workshop, setting up firewalls is another thing you should always do when setting up a server.
So, now we’ve got a server up and a way to access it. But you notice an IP address is kind of ugly and hard to remember… that’s where domain names come into play! If you’ve ever typed a website URL, you’ve effectively typed a domain name.
Well, how do domain names turn into IP addresses? All you need to know is that there are a lot of servers out there maintaining large tables of domain name to IP address mappings. These are known as DNS servers.
For purposes of this workshop, if you have the Github Student Developer Pack, you should be able to get a domain name from .tech for free for a year.
If you want to get a cool domain name, you can use this to compare prices from different registrars:
Every domain will have the following structure
For every domain, you can also have a bunch of records for subdomains
We’ll start by creating a bunch of A records:
Leaving the hostname blank will just lead to the main domain
Adding * as the hostname will route all otherwise-unspecified subdomains to a single address (a wildcard record)
Time to Live (TTL) is a field on DNS records that controls how long each record is valid and — as a result — how long it takes for record updates to reach your end users.
A very popular choice is Cloudflare DNS; it has some great features such as proxying.
Now, you have a cool domain for your server!
Fetch requests are a way to send HTTP requests to a server directly from a page using JavaScript. The advantages of using fetch requests over, say, reloading the page are that
The contents of the page do not need to be reloaded each time the request is made
The request takes a little less time to get processed compared to sending server-to-server requests from the backend.
Allows content to be loaded "optionally" - content is not loaded on the page until it is specifically requested by the user, so pages have less content and are rendered faster.
Fetch requests follow this syntax:
resource is the server (or resource) that the request is being sent to. It is a string with the URL.
options contain some custom settings for the request. More details later
They return a Promise - in other words, they return an asynchronous wrapper for the response from the server.
This promise can be processed using the .then method, which takes in a unary function. This function is applied on the response received from the server.
The response from a fetch request is an object with properties like status, protocol and body. Of these, the status and body properties are the most useful for now.
response.status is an integer representing the HTTP response code returned by the server. The full list of response codes can be found online, but a few common ones are below:
200 OK - all good
404 NOT FOUND - the url/resource was not found on the server side
500 Internal Server Error - the server ran into an error when trying to process the request
response.body is an encoded object that contains the body of the response. Note that failed requests have null as the value for the body property for their response objects. To decode the body into readable JSON that can be processed by your JS code, use the method response.json().
You could imagine using the status property in this way:
This status-checking and body-retrieval is often chained into 2 different promises using two sequential thens:
Let's walk through it step by step:
fetch("somedomain.com/some-route/", someData) - this line performs the fetch request and returns the response wrapped in a promise
.then(response => response.status === 200 ? response.json() : response.status) - here, we check the response status code to see if the request was successful.
If the response was successful (i.e. response.status === 200) then we return the body of the response using response.json()
Note that you may want to add more intermediary conditions and process different response codes differently, but the general form of a fetch request is as above.
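The two .then callbacks described above can be sketched as standalone helper functions (the helper names are made up for illustration; the URL is the placeholder from the text):

```javascript
// First .then: return the decoded body on success, or the status code on failure
const handleResponse = response =>
  response.status === 200 ? response.json() : response.status;

// Second .then: a number means an error status was passed through
const handleData = data => {
  if (typeof data === "number") {
    return "Request failed with status " + data;
  }
  return data; // otherwise process the decoded body as normal
};

// Usage (in a browser, or Node 18+ where fetch is global):
// fetch("somedomain.com/some-route/")
//   .then(handleResponse)
//   .then(handleData)
//   .then(result => console.log(result));
```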
Note that the fetch function takes in two parameters: the resource, and the options.
options is a JSON object that contains any custom settings that you may want to apply to the request. This includes header data, the type of content being sent (if any), the content itself, and others. If any settings are omitted, then the browser plugs in the default values for each setting. Below is an example of the options object, filled with default values for each setting (not all possible settings are shown).
Let's look at each setting:
method - this setting specifies the request method. There are 4 commonly used methods (GET, POST, PUT and DELETE), and each is used in different situations. More details
mode - what resource sharing mode should be allowed. "cors" stands for "Cross-Origin Resource Sharing", and this mode allows sharing resources across origins. The other options are "no-cors"
There are many more headers that can be assigned, and they are all listed .
body - this setting specifies the body of the request (i.e. some data that needs to be processed by the resource server). Note that this must be a string; JSON.stringify allows us to represent a JSON object with the request body as a string. Note also that a body CANNOT be present for a GET request.
The docs linked throughout are a very useful resource for understanding HTTP requests and responses, as well as frontend web development with JavaScript.
Next, we'll write our own fetch request to the NUSMods API.
Tag attributes are essentially properties that can be assigned, or attributed to a tag. The way to assign attributes is as follows:
Let's enhance our previous index.html file with some more elements and attributes.
Going back to our index.html file, let's add 4 divs below the paragraph tag. Copy the following code into the body of your HTML code (or write your own divs if you want):
Below these divs, add a line break element,
In CSS, you may want to define measurements for elements such as font size, height, width, curvature, thickness etc. There are multiple ways to define measurements, so let's look at a few.
CSS measurement values are written as a number followed by the unit (with no spaces in-between). Example: 20px.
Negative values are not valid for most properties (such as width or padding); margins are a notable exception that do accept negative values. Decimal values are accepted.
element.removeEventListener(event, listener); // to remove a specific listener
// note: the listener passed must be the same function reference that was added
<script>
// code goes here
</script>
<script src="script.js"></script>
document.addEventListener(event, listener); // add a listener to the page itself
element.addEventListener(event, listener); // add a listener to a particular element
let button = document.getElementById("hello");
button.addEventListener("click", () => alert("Hello!"));
<script src="script.js"></script>
<!DOCTYPE html>
<html>
<head>
<link rel="stylesheet" href="../styles.css">
<script src="script.js"></script>
<title>My web page</title>
</head>
<body>
<h1>Hello world</h1>
<button id="hello">Say hello</button>
</body>
</html>
<!DOCTYPE html>
<html>
<head>
<link rel="stylesheet" href="../styles.css">
<title>My web page</title>
</head>
<body>
<h1>Hello world</h1>
<button id="hello">Say hello</button>
<script src="script.js"></script>
</body>
</html>
document.addEventListener("DOMContentLoaded", function() {
// add event listeners to elements on the page
});
// In the script.js file:
document.addEventListener("DOMContentLoaded", function() {
let button = document.getElementById("hello");
button.addEventListener("click", () => alert("Hello!"));
});
/*
Note that function() {...} is an alternate way of
creating anonymous functions spanning multiple lines,
as explained much earlier in the guide
*/
git checkout -b sample-pr
vim hello.txt
git add hello.txt
git commit -m "new changes"
git push origin sample-pr
on:
issues:
types: [opened]
jobs:
comment:
runs-on: ubuntu-latest
steps:
- uses: actions/github-script@v7
with:
script: |
github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: '👋 Thanks for reporting!'
})
on:
issues:
types: [opened]
jobs:
apply-label:
runs-on: ubuntu-latest
steps:
- uses: actions/github-script@v7
with:
github-token: ${{ secrets.MY_PAT }}
script: |
github.rest.issues.addLabels({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
labels: ['Triage']
})
on:
workflow_dispatch:
schedule:
- cron: "0 12 * * *"
jobs:
poll:
runs-on: ubuntu-latest
steps:
- name: Fetch all
uses: actions/github-script@v7
with:
github-token: ${{ secrets.ORG_PAT }}
script: |
const repos = await github.paginate(github.rest.search.repos, {
q: "<query string>",
})
let newReadme = `
<updating the README>
`
const existingReadme = await github.rest.repos.getContent({
owner: context.repo.owner,
repo: "<repo>",
path: "README.md",
})
await github.rest.repos.createOrUpdateFileContents({
owner: context.repo.owner,
repo: "<repo>",
path: "README.md",
message: "Update README",
content: btoa(newReadme),
sha: existingReadme.data.sha,
})
let str = "hello world";
let spaceship = "artemis";
let strongPassword = "c<Db39)(-2?^#_=11MD.{[a"; // not my password
let emoji = "🦾🦵"; // yes this is valid, try it
function functionName(arg1, arg2, ...) {
// do something
return something; // optional
}














Note how the console is able to follow along as I type the function call and shows me the return value before I execute the line.
let name = "dino";
name[0]; // returns "d"
name[3]; // returns "o"
// Testing with other indices:
name[]; // no input - Uncaught SyntaxError: expected expression, got ']'
name[-2]; // negative number - returns undefined
name[2.7]; // decimal value - returns undefined
name[32]; // index out of range - returns undefined
name.charAt(0); // returns "d"
name.charAt(3); // returns "o"
// Testing with other indices:
name.charAt(); // no input - returns "d"
name.charAt(-2); // negative number - returns ""
name.charAt(2.7); // decimal value - returns "n"
name.charAt(32); // index out of range - returns ""
name.at(0); // returns "d"
name.at(3); // returns "o"
// Testing with other indices:
name.at(); // no input - returns "d"
name.at(-2); // negative number - returns "n"
name.at(2.7); // decimal value - returns "n"
name.at(32); // index out of range - returns undefined
let name = "Dino";
let nameUpper = name.toUpperCase();
let nameLower = name.toLowerCase();
nameUpper; // "DINO"
nameLower; // "dino"
name; // "Dino"
let name = "rocket sloth";
name.slice(0, 3); // "roc"
name.slice(4, 10); // "et slo"
name.slice(3); // "ket sloth"
name.slice(10, 3); // ""
name.slice(-5); // "sloth"
name.substr(0, 3); // "roc"
name.substr(4, 10); // "et sloth"
name.substr(3); // "ket sloth"
name.substr(-3, 5); // "oth"
name.slice(4, 10); // "et slo"
name.substring(4, 10); // "et slo"
name.slice(10, 4); // ""
name.substring(10, 4); // "et slo"
let name = "Rocket Sloth";
name.startsWith("Rock"); // true
name.startsWith("rock"); // false
name.endsWith("th"); // true
name.includes(" "); // true
name.includes(""); // true
name.includes("hijse"); // false
let x = 10;
let y = 5;
console.log(x, "+", y, "=", (x + y));
console.log(x + " + " + y + " = " + (x + y));
console.log(`${x} + ${y} = ${x + y}`);
function sumToN(n) {
let sum = 0;
for (let i = 1; i <= n; i++) {
sum += i;
}
return sum;
}
function sumToN(n) {
return n * (n + 1) / 2;
}
sumToN(10); // returns 55
function nullary() {
console.log("hello");
return 2;
}
function binary(x, y) {
let k = x ** y;
let m = k % y;
return m + y;
}
function ternary(x, y, z) {
let a = x + y + z;
let b = x - y + z;
return a / b;
}
function noReturnUnary(x) {
console.log(x);
}
function variadic(...args) {
// do something
}
function sum(...nums) {
let total = 0;
for (let i = 0; i < nums.length; i++) {
total += nums[i];
}
return total;
}
sum(3, 43, -4, 7.6, 9); // returns 58.6
function sumToN(n) {
return n * (n + 1) / 2;
}
let lambdaFunction = (arg1, arg2, ...) => something;
function lambdaFunction(arg1, arg2, ...) {
return something;
}
const sumToN = n => n * (n + 1) / 2;
sumToN(4); // returns 10
sumToN(10); // returns 55
sumToN(100); // returns 5050
const sum = (...nums) => {
let total = 0;
for (let i = 0; i < nums.length; i++) {
total += nums[i];
}
return total;
}
// the sum function from earlier
const sum = function(...nums) {
let total = 0;
for (let i = 0; i < nums.length; i++) {
total += nums[i];
}
return total;
};
// the sum to n function from earlier
const sumToN = function(n) {
return n * (n + 1) / 2;
};
function map(func, val) {
return func(val);
}
// this function adds 5 to the input
function add5(x) {
return x + 5
}
map(add5, 4); // returns 9
const map = (func, val) => func(val);
const add5 = x => x + 5;
map(add5, 4); // returns 9
// assume we have map, but add5 is only going to be used once
map(x => x + 5, 4); // returns 9
#!/bin/sh is also known as the shebang; it specifies the interpreter
echo is a command that prints its arguments to the standard output.
To create a file called -v, touch -- -v will work while touch -v will not. Similarly, to grep a file called -v, grep pattern -- -v will work while grep pattern -v will not.
-v usually enables a verbose output
-V usually prints the version of the command
/Users = user home directories
/var/log = log files
/tmp = temporary files
/dev/urandom = random number generator
$#: number of arguments
$$: process ID of current shell
Compared to C, no curly braces, instead do and done
Equivalent to
echo hello
Everything in a shell script is a command
Here, it means run the echo command, with argument hello.
All commands are searched in $PATH (colon-separated)
Find out where a command is located by running which COMMAND, e.g. which ls
for f in "My Documents"
How do we fix our script?
What do you think for f in "$(ls)" does?
Instead, use [[ CONDITION ]]: a bash built-in comparator that has special parsing
Good news: it also allows && instead of -a, || instead of -o, etc.
Write programs to work together.
Write programs to handle text streams, because that is a universal interface.
STDERR: a second output that the program can choose to use.
By default, STDIN is your keyboard, STDOUT and STDERR are both your terminal
a > foo: STDOUT of a goes to the file foo
a 2> foo: STDERR of a goes to the file foo
a < foo: STDIN of a is read from the file foo
a <<< some text: STDIN of a is read from what comes after <<<
You can also pipe to tee (look up in man what tee does)
Note that this forms the basis for data-wrangling, which will be covered later.
This shows the difference between the first 20 lines of the last boot log and the one before that.
A backgrounded program still prints to STDOUT and can clutter your terminal; instead, you can redirect its STDOUT to a file.
Backgrounding is handy especially to run 2 programs at the same time, like a server and a client: server & client
For example: nc -l 1234 & nc localhost 1234 <<< test
jobs: see all jobs
fg %JOB_ID: bring the job corresponding to the id to the foreground (with no argument, bring the latest job to the foreground)
You can also background the current program: ^Z, then run bg
^Z stops the current process and makes it a job.
bg runs the last job in the background.
$! is the PID of the last background process.
What is xargs doing?
Second-hand mini-pcs
Then, run sudo systemctl restart ssh
If successful, the server lets you in without asking for a password
nano .ssh/authorized_keys
paste in pubkey string, save
chmod -R go-rwx .ssh
If the response was not successful, we return the status code of the response so it can be shown to the user
.then(data => {...}) - this section of the code takes the data returned after processing the response
If the data returned is a number (i.e. typeof(data) === "number") then this means that the response status code was returned by the previous function, so there was an error, so we tell the user as such
Otherwise process the data as normal
and "same-origin".
cache - how the request interacts with the browser's cache. Some different options are "no-cache", "no-store", "reload", and "only-if-cached". These options are explained here
credentials - this specifies whether or not the user should send/receive cookies from the resource. The two other options are "omit" (never send/receive cookies) and "include" (always send/receive cookies). More details here
redirect - this specifies how to handle if the resource redirects our request elsewhere. Some other options are "error" and "manual". More details here
headers - this is a JSON object that contains the headers for the request. The sample shows two header settings:
"Content-Type" - this header specifies the type of content being requested. It is in the format "type/subtype" and some examples are "image/png" (a png image), "text/plain" (plain text), or "multipart/form-data" (multiple parts of form data). The full list of options can be found here.
"Access-Control-Allow-Origin" - this header indicates whether the response can be shared with requesting code from the given origin. This could be "*", so all origins can have access to the response, or "<origin>" where <origin> is a domain/IP address. More details
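Putting these settings together, an options object might look like this (a sketch with illustrative values, not the browser defaults for every field):

```javascript
const options = {
  method: "POST",              // request method
  mode: "cors",                // allow cross-origin resource sharing
  cache: "default",            // use the browser's standard cache behaviour
  credentials: "same-origin",  // send cookies only to same-origin URLs
  redirect: "follow",          // follow redirects automatically
  headers: {
    "Content-Type": "application/json",
  },
  // the body must be a string, hence JSON.stringify
  body: JSON.stringify({ query: "hello" }),
};

// fetch("somedomain.com/some-route/", options);
```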
<br>
You'll notice how two of the divs above are quite long, while the other two are quite short. Hmmmm.
When writing HTML code there may be certain elements that you may want to format/style similarly because they are intended to have a similar appearance and behaviour. Here, say you want to group the longer divs with each other, and the shorter ones with each other.
This is where the class attribute comes in. Elements are assigned a class based on what you, as the developer, want the element(s) to look/behave like. Elements that look/behave similarly can be given the same class.
Here, assign the longer divs the class "long-div" and the shorter divs the class "short-div" (or any class names of your choice, as long as there are no spaces). It should look like this:
Now, once we add CSS to the page we can use these classes. Reloading the page won't change anything yet though.
Recall that hyperlinks are created using the a tag. Let's add two links to our page: one for the orbital webpage and one for the NUS Hackers webpage. Inside the body, below the divs and the line break, add the following code (note that the anchor tags are nested within paragraph tags):
Let's add another line break using <br> below these links, and a horizontal line using the <hr> unpaired tag.
Now, when you go back and reload the page, you should see the above text rendered in two separate lines. But you'll realise that you haven't really added any hyperlink yet because clicking on the text takes us nowhere. This is where the href attribute comes in.
"href" stands for "HyperText Reference" and is used to link to images, links or files on the web (or on your local storage). It can be assigned to the <a> tag (and a few others) to give it a hyperlink. You can give a link to a page as a value to the href attribute to create the hyperlink:
Now if you go back and reload the page, you'll see the links, and you'll see that clicking on them sends you to the respective pages (see below).
Now let's add a form to our page. This form will eventually allow us to conduct a Google Search query. Below the links we added before, add the following code:
If you reload the page and try to enter something and submit the form, nothing will happen. This is because we haven't defined any attributes for the form yet. We'll do that in the next section, but for now lets add a couple of attributes to the other elements.
You can imagine having a form where some or all of the fields are necessary (i.e. you don't want the form to get submitted if they are empty). How do you control this? Simple: add the attribute required to the input field.
In the html file, add the required attribute to the <input> tag:
The required attribute takes in either "true" or "false" (as strings), but such attributes (called boolean attributes) can instead be omitted to set their value to "false", or placed without assignment to set their value to "true".
Another example of a boolean attribute is the hidden attribute that can be added to any content-defining tag (paragraphs, buttons, forms, input fields, divs, tables etc.) to specify whether it should be hidden or not (big surprise there).
Generally, input fields take in only text. But what if you want to specify the kind of input it takes? That's where the type attribute comes into play. The <input> tag can take on a type attribute to define what type it is. The values could be one of the following:
"text" - this is the default value, no need to define a type attribute in this case
"email" - this checks for a valid email format ([email protected])
"password" - masks the input so the user cannot see what they are typing (now you know how to get those black dots in a password field)
"number" - only allows numbers
"checkbox" - makes the input a checkbox that can be checked/unchecked. Multiple checkboxes can be checked at once.
"radio" - makes the input a radio option (like checkboxes, but only one radio option can be selected at a time, and cannot be unselected once clicked)
"file" - allows the user to upload a file
In our case, we do not need a type attribute for the <input> tag since it is already of type "text". But the type attribute can be assigned to the button instead, with the value "submit". This means that, when the button is clicked, the form gets submitted. Go ahead and add the type attribute to the button:
The placeholder attribute can be added to the <input> tag to define some placeholder text. Let's add some:
Now if you reload the page, you'll see "query" in greyed out text inside the input field. This placeholder will disappear immediately once you start typing. If you try to submit the form without any input, you will get a small message from the browser requesting you to fill in the input field, thanks to the required attribute.
Below the form, add a line break and a button below the break. This is done using the paired <button> tag:
Now this button is unique: unlike the previous one, which submits the form, this one does nothing. It's useless. Hence, it is unique on our page. We can use the id attribute to denote this uniqueness:
Like the class attribute, the id attribute has no visible effect on the page. However, it denotes that the element is unique on the page, and hence you can later add special, unique behaviour and styling to the element.
Finally, let's add an image to our page. Since we're learning HTML, why not an HTML logo? Visit this link and download the file. Then move the file to the same folder as your index.html file. Feel free to get your own HTML logo from elsewhere, but this is the same logo used in the sample code for this guide, and its background color becomes relevant later.
The image should be present alongside index.html in the same folder, and should have the name html-logo.png.
Now let's add it to our page. Use the <img> tag for this, along with the src attribute (src stands for source):
You should now see the image on the page once you reload it. If you don't, check for spelling mistakes ("scr" instead of "src", or "hmtl-logo", etc.), or it may be in a different folder than the index.html file.
In case the image doesn't load, you would see a small icon in place of the image. Instead of this icon, you can display some text using the alt attribute:
This means that if the image doesn't load for whatever reason, the text "html logo" will be displayed instead. Try this out by purposely misspelling the filename and reloading the page.
Some webpages have a feature called mouseover text, which is small text that appears when a user hovers their cursor over an element on the page for a few seconds. This is defined using the title attribute, and can be used either to give users a little more information about the element, or in a more creative way.
Add some mouseover text using the title attribute to the picture:
Now you'll see "html logo" when you hover your cursor over the image:
At the end of all this, your page should render like this in the browser:
The body of your html file should look something like this: (final code available here):
Next, we'll add some functionality to the form to allow it to submit queries to Google Search.
Absolute measurement units have fixed values, meaning they do not change regardless of the system, browser or screen size. There are a few that we use in our daily lives:
mm (millimeters)
cm (centimeters)
in (inches)
They can be converted between as follows:
1cm = 10mm
1in = 2.54cm
There are a few more absolute measurements:
px (pixels) - width of a pixel on the screen
pt (points) - often used for font size; similar to font size values on MS Word
pc (picas) - a typographic unit, similar to pt
They can be converted to other units as follows:
1in = 96px
1in = 72pt
1pc = 12pt
Note that pt and pc, while supported, are not very commonly used as a unit of measurement in CSS. px is the most commonly used, followed by mm and then cm and in to a much lesser extent.
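If it helps to see the conversions as code, here is a small sketch (the helper name toPixels is ours, not part of any CSS API) that resolves the absolute units above to pixels using the stated ratios:

```javascript
// Resolve absolute CSS units to pixels using the standard ratios:
// 1in = 96px, 1in = 2.54cm, 1in = 72pt, 1pc = 12pt.
const PX_PER_INCH = 96;

function toPixels(value, unit) {
  switch (unit) {
    case "px": return value;
    case "in": return value * PX_PER_INCH;
    case "cm": return (value / 2.54) * PX_PER_INCH;
    case "mm": return (value / 25.4) * PX_PER_INCH;
    case "pt": return (value / 72) * PX_PER_INCH;
    case "pc": return ((value * 12) / 72) * PX_PER_INCH;
    default: throw new Error(`Unknown unit: ${unit}`);
  }
}

console.log(toPixels(1, "in"));    // 96
console.log(toPixels(2.54, "cm")); // 96
console.log(toPixels(6, "pc"));    // 96
```

Notice that 1in, 2.54cm, 72pt and 6pc all resolve to the same 96 pixels, which is exactly what makes them absolute.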
The nature of being absolute measurements means that regardless of the computer specs, screen size, browser width/type, resolution, size of other elements, etc., the values correspond to constant sizes. This means that 100px measures the same whether it's on Safari or Chrome, whether it's on an old iPhone 3 or an 80-inch 4K TV screen, and whether the browser is maximized or resized to half.
Far more common (and in some ways better) than absolute measurements are the relative measurement units. These units are relative to some other size, and they will differ based on those dimensions. Let's take a look at a few of them:
% - this is relative to the size of the parent element as a percentage (i.e. 45% means 45% the width of the parent element)
em - this is relative to the font size of the current element; 2em means 2 times the font size
rem - this is relative to the font size of the root element; 2rem means 2 times the font size of the root
vw - this is relative to the width of the viewport; 1vw means 1% the width of the viewport
vh - this is relative to the height of the viewport; 1vh means 1% the height of the viewport
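Unlike absolute units, relative units can only be resolved against some context. Here is a sketch (all names and context values are made up for illustration, not a real browser API) of how these units might be resolved to pixels:

```javascript
// Hypothetical context an element might find itself in.
const context = {
  parentWidth: 800,    // px, used for %
  fontSize: 16,        // px, current element's font size, used for em
  rootFontSize: 16,    // px, root element's font size, used for rem
  viewportWidth: 1440, // px, used for vw
  viewportHeight: 900, // px, used for vh
};

// Resolve a relative CSS length to pixels against the given context.
function resolve(value, unit, ctx) {
  switch (unit) {
    case "%": return (value / 100) * ctx.parentWidth;
    case "em": return value * ctx.fontSize;
    case "rem": return value * ctx.rootFontSize;
    case "vw": return (value / 100) * ctx.viewportWidth;
    case "vh": return (value / 100) * ctx.viewportHeight;
    default: throw new Error(`Unknown unit: ${unit}`);
  }
}

console.log(resolve(2, "em", context));  // 32
console.log(resolve(50, "vw", context)); // 720
```

The same 50vw resolves to a different number of pixels as soon as the viewport width changes, which is the whole point of relative units.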
The following two figures demonstrate the difference between absolute and relative measurements. Note how the element size changes as the browser window is resized. Note that:
The screen is 1440px wide
The browser window initially takes up the entire screen
50vw is 50% the width of the browser, or 1440/2 = 720px. This means the two orange boxes initially have the same width.
We'll look at the box model for elements to introduce padding and margin, before we finally begin adding CSS to the page.
The final step of our Target workflow is automatically deploying our application to Github Pages when it is merged into main. This is the CD of CI/CD!
However, as Github Actions does not express "merging into main" as an event, we will instead be thinking in terms of deploying when changes are pushed to main instead.
We first go through the same questions as before:
"When will it run?" — when changes are pushed to main
"What will it do?" — compile and build the React project and publish the generated build files to Github Pages
"Is this going to be a separate workflow? A separate job in the same workflow? Or just another step in the existing job?" — this will be a separate workflow because (a) the events that trigger it are different from ci.yml, and (b) it is not logically a part of the ci workflow
If we look at our answer for (2), you may notice that we are essentially describing two separate tasks:
Compiling and building the React project
Publishing the generated build files to Github Pages
While we could represent them as a single job, we would like to explore what it's like to design jobs that depend on one another and to pass artifacts around in Github Actions.
We also realize that (3) reveals that we are no longer treating ci.yml as the full CI/CD pipeline, so you are free to rename the workflow!
What a mouthful! That's quite a lot of new steps and concepts. Fret not, we will be explaining each step individually.
Recall that previously, we mentioned that events may be expressed as dictionaries when they have additional properties/conditions. In our scenario, we only want the workflow to deploy to Github Pages when we push to main. So, we can express this using the push event, and specify that it should only run when one of the listed branches (which includes main) is pushed to.
Similar to the jobs written earlier, we will declare a job to:
Fetch the repository
Setup Node.js
Install project dependencies
Build the production distribution
Doing so, we should now have a dist/ folder in our virtual machine runner for the build job.
Let's talk more about the final step of the build job.
Artifacts are files or collections of files produced during a workflow run. These artifacts are stored on Github. You may wish to use artifacts for things like storing build logs, test results, binary or compressed files, etc.
They are also a way to share data between jobs in a workflow. Recall that steps in the same job share the same filesystem because they belong to the same job's virtual machine runner, while steps in different jobs have completely separate filesystems, belonging to separate virtual machine runners. Artifacts are the way to bridge this gap.
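To make the idea concrete, here is a hypothetical, minimal sketch (not part of this guide's pipeline; all names are illustrative) of two jobs sharing a file using the generic actions/upload-artifact and actions/download-artifact actions:

```yaml
jobs:
  producer:
    runs-on: ubuntu-latest
    steps:
      # Create a file on this job's runner and publish it as an artifact
      - run: echo "hello" > result.txt
      - uses: actions/upload-artifact@v4
        with:
          name: my-artifact
          path: result.txt
  consumer:
    needs: producer
    runs-on: ubuntu-latest
    steps:
      # Pull the artifact onto this job's (separate) filesystem
      - uses: actions/download-artifact@v4
        with:
          name: my-artifact
      - run: cat result.txt
```

Our workflow uses the Pages-specific actions/upload-pages-artifact instead, but the underlying mechanism is the same.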
For our use case, we want to publish the generated dist/ folder from the previous step in build as an artifact, so that our next job, deploy, has access to the files and can publish them accordingly.
Thankfully, there is an existing action, actions/upload-pages-artifact@v3, that handles this process; we simply specify the name of the generated artifact (github-pages) and the path to the directory containing the static assets, i.e. dist/.
So, after build runs, we would have an artifact called github-pages uploaded to Github and accessible to subsequent jobs. You can read more about artifacts in the Github documentation.
We then declare a new job, deploy that needs the build job. This is how we construct the dependency graph between jobs, requiring one to complete before the other can execute.
In order to ensure that we can successfully publish to Github Pages, we also need to modify the default permissions of GITHUB_TOKEN.
GITHUB_TOKEN is a token that is automatically created as a secret in all workflows. It has access to the current repository, and it expires after the workflow completes. Secrets are a way of storing sensitive information in an organization, repository, or repository environment.
Essentially, GITHUB_TOKEN allows steps in the job to have some access to the current repository. So, in order for the job to publish to Github Pages, we want to grant the token write access to both pages and id-token. More information about the various permissions of GITHUB_TOKEN can be found in the Github documentation.
An environment in Github refers to a general deployment target that can be configured with protection rules and secrets. Essentially, they allow you to handle different stages of your project, like development, production, and in our case, github-pages.
These environments are displayed on the repository page.
For our scenario, we want to set the url of the environment to point to the output (page_url) of one of the job steps with step id deployment.
Finally, we can start deploying the artifact we published earlier. We use the action actions/deploy-pages@v4, targeting the artifact named github-pages, which we named earlier.
Notice that we also give an additional id to the step, deployment. This allows the step to be accessible via ${{ steps.deployment }} and allows the outputs of the action to be accessible to the environment (seen above). ${{ ... }} is a way of declaring expressions in a workflow file. More information about expressions in Github Actions can be found in the Github documentation.
We now have a workflow with sequential jobs, with build generating the production build as an artifact, and deploy consuming that artifact and publishing it to Github Pages.
We can also visualize the process of uploading an artifact as such:
As per usual, add cd.yml, create a commit, and push to main.
This should already trigger the workflow to run. However, if you navigate to Actions and select the "Deploying to Github Pages" workflow, you will notice that it fails:
If you select the specific workflow run, go to the deploy job, and select the "Publishing production artifact" step, you will see the following error:
The last message tells us what went wrong:
So, visit the URL provided (it is different from the one above!) and select "Github Actions" as the source instead:
Finally, to re-run the workflow, go back to Actions and return to the failed workflow run. At the top right corner, you should see a button to "Re-run workflows"; choose to re-run all jobs:
This time, you should see the following:
You can select the URL in the deploy job component and you should be greeted with the following UI:
🎊 Congratulations! You have successfully setup a traditional CI/CD pipeline using Github Actions! Play around with the workflows we have created!
Next up: we will be exploring some unique workflows in Github Actions!
I'm sure you know what NUSMods is: it's a website that has details about every course offered at NUS, as well as a degree planner, a timetable builder and a map of the campus. It also has its own API (documentation here), which we are going to use.
The task is to build a simple page that will have a form. The form will allow a user to type in a course code, and once submitted the form will submit a fetch request to the NUSMods API, request the course data, and then display it to the user. If the course code they enter is invalid, then show them an error message.
It is recommended to try the first steps on your own, until you get to the fetch request, to practice writing HTML and JS.
The API we will use is the one to get the course info given a course code and year. The general URL format is below:
Example:
Copy paste the above URL into your browser address bar to see the response. You'll see the response body in JSON format, like below.
Note: The course code needs to be in all capital letters for the request to succeed.
For this exercise, we'll stick to the year 2023-2024.
So the first step is to ready the form. It will have the following:
A text input field that must have a value before the form is submitted
A submit button
Both of these, along with the form, must be uniquely identifiable as well.
Here's how that will look:
Note that we do not define an action or method attribute because we do not want the form to actually submit, we just want it to trigger a fetch request.
Once the request completes, we'll need to put the course details somewhere, so it's a good idea to have a container element ready to accommodate the data. You could also directly place the details in the body, but a container helps to structure the page better. The container should not initially be visible to the user.
(Optional) You may also want to have a separate element ready to show an error message to the user, unless you intend to display an alert instead.
Now we need to add a listener to the form that will wait till it is submitted. Then it should trigger a fetch request.
event is a deprecated global variable that refers to the event in question. It is better to use form validation functions to prevent forms from submitting. In this case I am using event because it is easier, and form validation is out of the scope of this guide.
In our getCourseData function from above (or whatever you decided to name the function), we need to ready the course code. This is nothing much, just get the input field value and make it all caps:
Note that the value attribute of an input field returns the value of the field, which in this case is whatever the user has typed into the text field.
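To see this step in isolation, here is a small, pure helper (the name buildCourseUrl is ours, not from the sample code) that normalizes a user-typed course code and builds the request URL used in the next step:

```javascript
// Normalize the raw input (trim whitespace, uppercase) and
// plug it into the NUSMods course-info URL format.
function buildCourseUrl(userInput, acadYear = "2023-2024") {
  const courseCode = userInput.trim().toUpperCase();
  return `https://api.nusmods.com/v2/${acadYear}/modules/${courseCode}.json`;
}

console.log(buildCourseUrl("  cs2030s "));
// https://api.nusmods.com/v2/2023-2024/modules/CS2030S.json
```

Keeping this logic in a pure function makes it easy to test without touching the DOM or the network.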
Now comes the fetch request. Recall that to get course details based on course code, we submit a request to this URL:
So the resource parameter for the fetch request will be the above URL with the user's input plugged in. As for the options parameter, luckily we do not need to specify any because the default options suffice!
So the fetch request looks like this:
Next, we need to check the response status. The API will return 200 if the course code was valid, 404 if it wasn't, and other codes indicate other unforeseen errors. You can decide how you want to process it, but in this case since we don't care what the response code is, we can return null if there is an error.
Lastly, we need to process the data and present it to the user. For starters, try displaying the course code, course title, description, and how many units it is.
Here's the basic skeleton:
That's the fetch request completed. Now, in the body of the second then, we need to display the course details.
There are 3 main ways to do this, and it's up to you to choose which one. The implementation is left to you as an exercise.
This approach involves creating elements and adding the text inside of them, then appending them inside the course-container element we made earlier. Make use of the document.createElement method, the element.appendChild method and (optionally) the document.createTextNode method.
The advantage of this is that it is quite flexible to changes in specification. For instance, if I decide to also show the prerequisites and corequisites of every course, it's easier to just edit the JS file to create a couple of new elements.
The disadvantage of this approach is that every time the fetch request runs, the elements are re-created and re-added to the page.
For this approach, you'll need to edit the html to add elements for the course details to be contained within, and give them all ids. An example is below:
You'll then need to query for these elements and edit their innerText properties to contain the data needed.
The advantage of this approach is that the elements are only created once, and you just need to change their values.
The disadvantage of this approach is that the code is not that flexible to changes in specification, since you need to edit both the HTML AND the JS script to implement any changes.
This approach is the quickest way to do it. Remember the innerHTML property? It holds everything inside the element in question, including nested elements. This means that it is possible to add elements inside of another one by way of the innerHTML attribute:
The above code will insert an <h1> into the div that was selected.
This means we can just format the entire contents of the course-container div into a string, and set the element's innerHTML to that string.
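As a sketch, the string-formatting step might look like the following (the property names mirror the NUSMods API response, e.g. moduleCode and moduleCredit, but double-check them against the API documentation; the function name is ours):

```javascript
// Format a course-data object into one HTML string for innerHTML.
function formatCourse(data) {
  return `
    <h1>${data.moduleCode}</h1>
    <h2>${data.title}</h2>
    <p>${data.description}</p>
    <p>${data.moduleCredit} units</p>
  `;
}

// Usage (in the browser, inside the .then callback):
// document.getElementById("course-container").innerHTML = formatCourse(data);
```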
The advantage of this approach is that it is fast and easy to write, and it is easy to implement changes to the specification.
The disadvantage is that this approach allows for something called cross-site scripting (XSS). This is when malicious code is injected into a webpage, and one of the ways to do this is through the innerHTML property. Look at the code below:
This adds a <script> tag inside the div, which causes the code inside it to be executed. Here, the code is just showing a simple alert, but it is possible to write code which behaves much more maliciously, such as creating and submitting invisible forms, reading cookies set by the page, messing with the page content, or worse.
Because of the vulnerability it creates, this method is not recommended at all, except in cases where the developer has complete control over the content being added to the innerHTML, or where it is absolutely certain that the content being read in is safe, and the contents are not very long.
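If you do have to use innerHTML, one common mitigation is to escape any user- or API-supplied text before interpolating it. This is a minimal sketch (the function name is ours), not a complete sanitizer — prefer textContent/createTextNode or a vetted library in real code:

```javascript
// Replace the characters that have special meaning in HTML
// with their entity equivalents. Order matters: & must go first,
// or the & in the other entities would be double-escaped.
function escapeHtml(text) {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

console.log(escapeHtml("<script>alert('hi')</script>"));
// &lt;script&gt;alert(&#39;hi&#39;)&lt;/script&gt;
```

With the text escaped, the browser renders the injected markup as plain text instead of executing it.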
The code for this exercise can be found in the sample code. Approach 1 has been used because we had not yet done a real example of creating and adding elements. Approach 2 has been left as an exercise, and approach 3 has been demonstrated just to show how it works.
This is the end of the JavaScript guide! If you would like to practice more fetch requests, I suggest testing out some public API endpoints. There is also an example that shows a couple of the endpoints, and two fetch request examples (one of them using a PUT request, so take a look at the options JSON for that request).
The next guide will be on React, a JavaScript framework that allows you to combine HTML, CSS and JS into a single abstraction to make frontend development slightly easier.
for i in 1 2 3 4 5; do echo hello; done

#!/bin/sh
echo something

echo Hello

echo location

name=COM3
echo $name

env

#!/bin/sh
echo $0
echo $1
echo $2
echo $#

for i in $(seq 1 5); do echo hello; done

`for x in list; do BODY; done`

for i in $(seq 1 5); do echo hello; done

if test -d /bin; then echo true; else echo false; fi;

if CONDITION; then BODY; fi

if test -d /bin; then echo true; else echo false; fi;

ssh <username>@<ip-address>

useradd -m -d /home/<username> -s /bin/bash <username> # Add user
usermod -a -G sudo,adm <username> # Give permissions
sudo passwd <username> # To create a password for the user

<domain name>.<tld>

nushackers.org

www.nushackers.org (www subdomain)
school.nushackers.org (school subdomain)

fetch(resource, options)

fetch(resource, options)
  .then(processResponse);

fetch("somedomain.com/some-route/", someData)
  .then(response => {
    if (response.status === 200) {
      let data = response.json(); // get the data (note: this returns a Promise that resolves to the data)
      // do something with the data
    } else {
      // show the user some error message
      alert(`Error: ${response.status}`);
    }
  })

fetch("somedomain.com/some-route/", someData)
  .then(response => response.status === 200 ? response.json() : response.status)
  .then(data => {
    if (typeof(data) === "number") {
      // show the user some error message
      // (note: `response` is out of scope here, so we use the status passed along as `data`)
      alert(`Error: ${data}`);
    } else {
      // process the data
    }
  })

let options = {
  method: "GET",
  mode: "cors",
  cache: "default",
  credentials: "same-origin",
  redirect: "follow",
  headers: {
    "Content-Type": "application/json",
    "Access-Control-Allow-Origin": "*"
  },
  body: JSON.stringify(someJSONData)
}

<pairedTag attribute1="value1" attribute2="value2" ...>some content</pairedTag>
<unpairedTag attribute1="value1" attribute2="value2" ...>

<div>This is a div, a generic html element</div>
<div>This is also a generic html div element</div>
<div>Also a div</div>
<div>Still a div</div>

<div class="long-div">This is a div, a generic html element</div>
<div class="long-div">This is also a generic html div element</div>
<div class="short-div">Also a div</div>
<div class="short-div">Still a div</div>

<p><a>Orbital webpage</a></p>
<p><a>NUS Hackers webpage</a></p>

<p><a>Orbital webpage</a></p>
<p><a>NUS Hackers webpage</a></p>
<br>
<hr>

<p><a href="https://orbital.comp.nus.edu.sg/">Orbital webpage</a></p>
<p><a href="https://hckr.cc">NUS Hackers webpage</a></p>

<form>
  <input>
  <button>Submit</button>
</form>

<input required>

<button type="submit">Submit</button>

<input placeholder="query" required>

<button>This button does nothing</button>

<button id="useless-button">This button does nothing</button>

<img id="html-logo" src="html-logo.png">
<!-- i also gave it an id since it is again a unique element and i want it to have some unique styling later -->

<img id="html-logo" src="html-logo.png" alt="html logo">

<img id="html-logo" src="./html-logo.png" alt="html logo" title="html logo">

<h1>Welcome</h1>
<p>This is my first page</p>
<div class="long-div">This is a div, a generic html element</div>
<div class="long-div">This is also a generic html div element</div>
<div class="short-div">Also a div</div>
<div class="short-div">Still a div</div>
<br>
<br>
<p><a href="https://orbital.comp.nus.edu.sg/">Orbital webpage</a></p>
<p><a href="https://hckr.cc">NUS Hackers webpage</a></p>
<br>
<hr>
<form>
<input placeholder="query" required>
<button type="submit">Submit</button>
</form>
<br>
<button id="useless-button">This button does nothing</button>
<br><br>
<img id="html-logo" src="./html-logo.png" alt="html logo" title="html logo">







# .github/workflows/cd.yml
name: Deploying to Github Pages

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Fetch repository
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'yarn'
      - name: Install dependencies
        run: |
          yarn
      - name: Building
        run: |
          NODE_ENV=production yarn build
      - name: Uploading production artifacts
        uses: actions/upload-pages-artifact@v3
        with:
          name: github-pages
          path: dist

  deploy:
    needs: build
    runs-on: ubuntu-latest
    permissions:
      pages: write
      id-token: write
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    steps:
      - name: Publishing production artifact
        id: deployment
        uses: actions/deploy-pages@v4
        with:
          artifact_name: github-pages

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Fetch repository
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'yarn'
      - name: Install dependencies
        run: |
          yarn
      - name: Building
        run: |
          NODE_ENV=production yarn build
      - name: Uploading production artifacts
        uses: actions/upload-pages-artifact@v3
        with:
          name: github-pages
          path: dist

      - name: Uploading production artifacts
        uses: actions/upload-pages-artifact@v3
        with:
          name: github-pages
          path: dist

  deploy:
    needs: build
    runs-on: ubuntu-latest

    permissions:
      pages: write
      id-token: write

    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}

    steps:
      - name: Publishing production artifact
        id: deployment
        uses: actions/deploy-pages@v4
        with:
          artifact_name: github-pages

git add .github/workflows/cd.yml
git commit -m "Add CD workflow"
git push

Error: Failed to create deployment (status: 404) with build version 6dbc2327e55394cd2690908b1b23d14eddb4a3cb. Request ID 1481:16FC0C:39A4ED5:737FC5E:67EA91C2 Ensure GitHub Pages has been enabled: https://github.com/woojiahao-git-mastery/cicd-calculator/settings/pages

https://api.nusmods.com/v2/{acadYear}/modules/{courseCode}.json

https://api.nusmods.com/v2/2023-2024/modules/CS2030S.json

<form id="nusmods-form">
  <input id="course-code" placeholder="Course code (ex: MA1301)" required>
  <button type="submit">Find course</button>
</form>

<div id="course-container" hidden></div>

document.addEventListener("DOMContentLoaded", function() {
  document.getElementById("nusmods-form").addEventListener(
    "submit",
    () => {
      event.preventDefault(); // prevent the form from submitting to the default path "/" using this line
      // note: event is deprecated, but still works
      // try instead to look at form validation functions
      // or try using this: arguments[0].preventDefault();
      getCourseData(); // this function will have the fetch request
    }
  )
})

function getCourseData() {
  let inputField = document.getElementById("course-code");
  let courseCode = inputField.value.toUpperCase();
  // fetch request here
}

https://api.nusmods.com/v2/2023-2024/modules/{courseCode}.json

fetch(`https://api.nusmods.com/v2/2023-2024/modules/${courseCode}.json`, {});

fetch(`https://api.nusmods.com/v2/2023-2024/modules/${courseCode}.json`, {})
  .then(response => response.status === 200 ? response.json() : null);

fetch(`https://api.nusmods.com/v2/2023-2024/modules/${courseCode}.json`, {})
  .then(response => response.status === 200 ? response.json() : null)
  .then(data => {
    if (data === null) {
      // course code was invalid, display an error
    } else {
      // data contains the course details in a JSON format
      // read the documentation to see what properties are relevant to your needs
      // OR copy paste the example URL from the start of this section into your browser
      // to see the JSON for yourself
    }
  });

<div id="course-container" hidden>
  <h1 id="cc"></h1>
  <h2 id="course-title"></h2>
  <p id="course-description"></p>
  <p id="credits"></p>
</div>

let element = document.querySelector("div");
element.innerHTML = "<h1>Heading</h1>";

let element = document.querySelector("div");
element.innerHTML = "<script>alert('HTML Injection Successful 😈');</script>";









This section will tackle our very first task: running unit tests.
The most fundamental questions that Github Actions workflows answer are:
When will it run?
What will it do?
"When will it run?" is answered by specifying the events that the workflow responds to. In our scenario, we want this workflow to run when a pull request is created on the repository.
"What will it do?" can then be broken down into several more guiding questions:
Which OS should this run in?
Are there key differences between OSes that should be accounted for?
What programming language is this project written in?
What steps should be run to achieve the given workflow?
We will take a look at how we can answer questions (4) and (5) in a following section. Let's first tackle questions (1), (2), and (3) in our scenario.
In the case of calculator.test.ts, the OS we run it on does not (and should not) matter, as there are no OS-specific test cases. For simplicity, we will pick Ubuntu.
The project is written in React, so it depends on having Node.js available.
We first need to get the repository, install all necessary dependencies, and run the test script that is provided (see package.json)
You might be asking yourself, "What if I have unit tests that are specific to an OS? Or version of Node.js? Or Ubuntu version?" Github Actions supports something it calls a matrix strategy, which runs a given workflow across a matrix of variables. We will briefly dive into it later!
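As a taste, a matrix strategy might look like the following sketch (the OS and Node.js versions here are illustrative, not part of this guide's pipeline):

```yaml
jobs:
  unit-tests:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest]
        node: ['18', '20']
    # One job instance is spawned per (os, node) combination — four in total
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node }}
      - run: yarn && yarn test
```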
Now that we have clearly outlined the key details of this workflow, let's get down to writing your very first workflow file. Remember, all Github Actions workflows must reside in .github/workflows, so create a new file called ci.yml!
Let's break down each section and explain what we're doing.
We are giving the workflow a name that Github Actions will display. If the name is omitted, GitHub displays the workflow file path relative to the root of the repository.
Then, we specify when this workflow runs, aka "when will it run?". This is specified with the on key, and the values correspond to the events that trigger the workflow.
In our case, we know that we want to run the workflow during a pull request, so we have included the pull_request event. We have also included the workflow_dispatch event, so that we are able to manually trigger this workflow from Github without requiring a pull request (more about it later). This is particularly useful if we want to verify that the workflow works without going through the hassle of creating a pull request.
We then start to specify the jobs that comprise the workflow. We give the job a job ID of unit-tests.
Recall that we said that every job runs in its own runner, which is a virtual machine by default. Therefore, we need to specify the OS that our unit-tests job will execute in, which we have decided earlier to be Ubuntu. As we are not particular about the version of Ubuntu we will use, we can use the ubuntu-latest label, which is one of the Github-hosted runners.
This means that we can think of every step of this job executing within an Ubuntu virtual machine (because they really do!). So we will be using bash commands, and we will have access to things like the software preinstalled in Ubuntu runners.
We declare the steps of a job under the steps array as a list of dictionaries. As we are running the job in a virtual machine, we will basically have an empty machine at the start of the job, and it is our responsibility to start populating and interacting with this empty machine to create the intended workflow.
The very first step we need to do is fetch the current repository, so that the virtual machine runner has access to the project structure, and more importantly, the unit tests. We give the step a human readable name (that is also displayed on Github) using the name key. If there is no name provided, Github will display the script or action that is being used.
Here, we use an action — a reusable extension that performs some set of operations — called actions/checkout@v4. You can read more about what the action comprises, but essentially, we are using it to perform a checkout of the current repository, retrieving all of its contents (from the latest commit) onto the virtual machine runner.
Then, we need to setup Node.js on the virtual machine runner. We can use the action actions/setup-node@v4 to setup Node.js automatically for us. You can read more about the action in its documentation.
We can specify inputs for the action using the with key, providing the various inputs as a dictionary. In this case, we want to use Node.js version 20, and we want to use the yarn package manager — instead of npm — as that is what we have used for the example application.
Before we can run the unit tests, we need to ensure that all of the necessary project dependencies are retrieved. This is where we can use scripts in steps.
Recall that we mentioned that all steps in the job are effectively running on Ubuntu, so we will specify scripts that use Ubuntu's built-in shell: bash. You might want to specify a different shell for certain use cases, which is possible via the shell key.
So, we declare our script through the run string. The pipe operator after the run key indicates that we are specifying a multi-line string in YAML.
The script we will run this time is yarn, which effectively installs the project's dependencies. You can think of this as running yarn directly in an Ubuntu terminal, where the current folder ($GITHUB_WORKSPACE) is the root of the Node.js project.
Finally, we can start to execute the unit tests of the project. Again, we use a script, but this time, the script will be NODE_ENV=production yarn test which effectively sets the environment variable NODE_ENV to be of value production.
test is a script that we have provided in the example application, and it essentially runs vitest, executing calculator.test.ts.
Voilà! You have successfully written your very first Github Actions workflow! Simple, isn't it? Let's recap what we did.
At a glance, this is the high-level overview of the new CI/CD pipeline you have written.
To properly visualize and understand how the filesystem of the virtual machine runner changes throughout the job, we have also created this visualization (bolded text are the changes between steps):
Once you have added the workflow, you need to commit and push it!
Then, we can start to verify that the Github Action works as intended! This is where the workflow_dispatch event comes in handy, where we are able to manually trigger Github Actions. If you navigate to your fork of the example repository, you can visit the Actions tab. You will see the following:
It lists the workflows available, and what has run/are running. We are interested in our new pipeline CI/CD Pipeline, so select it and you should see the following:
It looks almost the same, but this time, there is a banner that tells you that "This workflow has a workflow_dispatch event trigger". Then, there is a dropdown to "Run workflow", select it and stick with main and click the "Run workflow" button:
Refresh the page, and you will now see a new entry in the table:
Give it a few seconds and then click into the action. You will see that the unit tests have failed:
This is because we have intentionally made one of the unit tests fail (1 divided by 2 is not 0.4!). Let's try to fix this unit test while exploring the pull_request event!
The following steps expect some level of understanding of Git. You can refer to our Git resources!
This guide is not a software engineering exercise, so we will tell you exactly where the error is and focus on exemplifying the pull_request event.
We had intentionally set one of the unit test assertions to be incorrect, specifically along these lines:
We have set the expected value to be 0.4 when it should clearly be 0.5! Let's fix this as a pull request to your own repository to see the pull_request event in action.
Create a new branch, called fix-unit-test:
Then, go to the file calculator.test.ts and fix line 13 to be the following:
Then, add the file and create a commit. The commit message can be anything you want:
Finally, push the branch to your fork:
Then, go to your fork on Github and create a new pull request. Pull request > New pull request. Give the pull request any title and you can leave the description blank.
The base branch should be YOUR main branch, not the original repository's branch!
Create the pull request and wait a while. You will notice that the component towards the bottom changes to this:
This is how you know that your workflow is running and it was triggered by the pull_request!
Now, if you select the running CI/CD pipeline, you will be brought back to the same page as earlier; this time, however, you will notice that the workflow passes!
In fact, you will even see the individual steps of the job unit-tests that you defined! Wonderful! You have successfully:
Created a new Github Actions workflow
Observed what a failing unit test looks like
Fixed and verified that the pull_request event is working
Go ahead and merge the pull request into main and update your local repository to receive the latest changes:
Now that we have implemented the very first step, let's take a look at implementing step 2: linting!

Can certain tasks be split into separate jobs and run in parallel?
What are jobs that depend on others?
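As a sketch of how this could look (the lint and deploy jobs here are illustrative, not part of the example repository), jobs in a workflow run in parallel by default, and the needs key makes a job wait for others to succeed first:

```yaml
jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - run: echo "run unit tests"
  lint:
    runs-on: ubuntu-latest # runs in parallel with unit-tests
    steps:
      - run: echo "run linter"
  deploy:
    runs-on: ubuntu-latest
    needs: [unit-tests, lint] # waits for both jobs to succeed
    steps:
      - run: echo "deploy"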










```yaml
# .github/workflows/ci.yml
name: CI/CD Pipeline
on: [pull_request, workflow_dispatch]
jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - name: Fetch repository
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'yarn'
      - name: Install dependencies
        run: |
          yarn
      - name: Run unit tests
        run: |
          NODE_ENV=production yarn test
```

```sh
# Commit and push the workflow
git add .github/workflows/ci.yml
git commit -m "Add CI workflow"
git push -u origin main
```

```ts
// The intentionally broken assertion in calculator.test.ts
test('divide 1 / 2 to equal 0.5', () => {
  expect(divide(1, 2)).toBe(0.4);
});
```

```sh
# Create the fix branch
git checkout -b fix-unit-test
```

```ts
// The corrected assertion on line 13
expect(divide(1, 2)).toBe(0.5);
```

```sh
# Commit and push the fix
git add src/calculator.test.ts && git commit -m "Fix unit test"
git push -u origin fix-unit-test
```

```sh
# After merging the pull request, update your local repository
git checkout main
git fetch origin main && git merge origin/main
```

You can get the slides for this workshop here:
Writing code or editing files on a computer has a lot of moving parts: you often spend more time switching between files, navigating, and editing code than writing a long, uninterrupted stream of words.
As programmers, or just general power users, we will spend a lot of time doing these things, so it is extremely beneficial to invest some time into learning an editor.
People tend to have extremely strong opinions on editors to learn. See:
For the purposes of this workshop, we'll be learning vim, a command line editor. Feel free to apply the experience of learning a new editor to any other editor of your choice.
Within NUS, vim is the main editor you'll learn and use (you'll probably use it if you've done CS2030 within SOC). Vim is an editor with an extremely rich history. Vim is actually an acronym for Vi IMproved, created by Bram Moolenaar in 1991, who unfortunately passed away in 2023.
As the name suggests, it was based on another text editor, vi, created by Bill Joy in 1976.
Vim is built around a bunch of cool ideas, and a lot of tools support vim emulation. It is probably really useful to learn the neat ideas of vim even if you end up using a different editor for your day to day use.
Bill Joy was trying to create an editor that was usable over a 300 baud modem. To put this into context, that is approximately a 0.3 kbit/second connection, and text was often transmitted more slowly than it could be read. This is why vim has somewhat unintuitive commands when you first start out: because you could transmit only about one letter a second, the commands had to be really, really short.
As it turns out, if you build an editor with the purpose of minimizing keystrokes, you have a really efficient editor.
The key idea behind vim is that as a power user, you spend most of your time reading, not writing. As such, vim is built to be a modal editor: it has different modes for inserting text and manipulating text. Vim is programmable with Vimscript and other languages, and there is a huge, thriving plugin community around it. Vim is also a programmable interface: keystrokes are commands, and we can compose them to perform complex actions. While vim does have mouse support, efficient use of vim very much avoids the mouse, simply because it's too slow; vim even avoids the arrow keys, because it would take too much time to move your hands to them.
Vim is built to be as efficient as possible
Vim itself is a programmable interface: individual keystrokes become our commands and we can combine them to do some cool stuff.
Vim tends to avoid anything that requires moving your hands off the home row of the keyboard. This means less emphasis on the arrow keys and the mouse, even though they still work in vim.
There are a few primary modes of vim:
Normal Mode - For moving around a file and making small edits
Insert Mode - For inserting text
Visual Mode - For selecting blocks of text
Command Mode - For entering commands
Keystrokes have very different meanings in different operating modes. For example, x in insert mode will just insert the 'x' character, but in normal mode it would delete a character.
In its default configuration, Vim shows the current mode in the bottom left. The initial/default mode is Normal mode. You’ll generally spend most of your time between Normal mode and Insert mode.
There's a common joke that if you give a web designer a computer with vim loaded up and ask them to quit vim, you will get a random string generator. To open vim, just type vim in the terminal. To quit, just type :q. This might seem unintuitive at first, but we'll explain it as we go along.
If you're confused why vim is built this way, remember that the whole philosophy of vim is doing things with as few keystrokes as possible.
Command mode is where we run commands, similar to a command line, in vim. To enter this mode, simply press : . A text bar should appear at the bottom of the screen. From there, we can execute several vim commands.
: - go to command mode
q (in command mode) - quits the file
w (in command mode) - saves the file
! - force an action
Once you are done, vim should automatically put you back in normal mode
Normal mode is your default mode, where you should spend most of your time. In vim, if you ever get lost or are not sure what is happening, always reset to normal mode: Esc will bring you to normal mode from any of the other modes. You will be using this a lot.
It might seem quite counterintuitive to use escape since it's quite out of place on your keyboard. However, vi was created on an ADM-3A terminal. It looks like this:
Notice where the Esc key is?
You will use the Esc key a lot when using vim. Consider remapping your Caps Lock:
If you're using Linux, you'll figure it out ;)
First, let’s open up a file using vim <filename>. We can navigate the file using hjkl (left, down, up, right respectively). Why not arrow keys (or a mouse)? Historically, it’s because old keyboards did not have arrow keys or a mouse. In practice, however, it is extremely efficient, as you don’t need to move your hands away from the alphanumeric keys to do anything.
Here are more movement options in Normal mode:
Open a text file using the command vim file. It could be any piece of text. Try navigating around the file with the above commands!
Make sure you are in normal mode. Esc
i to insert before cursor
I to insert at the start of line
a to insert after cursor
Get out of insert once done. Esc
In normal mode, you can quickly delete a portion of text:
d + modifier, deletes a certain portion based on the modifier
dw – delete word
6dw – delete 6 words
Similarly, change allows you to quickly delete and change a certain portion of text:
c + modifier – deletes then puts you into insert mode
cw – change word
7cw – change 7 words
In normal mode, yank (y) copies text into a buffer (think of Ctrl + C). The following commands are variations:
yy – yank the entire line
yw – yank a single word
6yw – yank 6 words
x – delete a certain character
r – replace a character
. – repeat the last action
You can combine nouns and verbs with a count, which will perform a given action a number of times.
3w move 3 words forward
5j move 5 lines down
7dw delete 7 words
You can use modifiers to change the meaning of a noun. Some modifiers are i, which means “inner” or “inside”, and a, which means “around”.
ci( change the contents inside the current pair of parentheses
ci[ change the contents inside the current pair of square brackets
da' delete a single-quoted string, including the surrounding single quotes
Try to fix the typos and small errors here!
There are a few kinds of visual modes:
Visual – v
Visual Line – V
Visual Block – Ctrl + v
You can use these selections along with the commands covered above:
y (yank)
d (delete)
c (change)
It seems like the data from the first two lines is corrupted, let's remove them from our data!
Aside from saving and quitting, here are a few more important commands to know:
:enew – opens a new file
:e <filepath> – open the file at the specified path
:sp – open a new horizontal split
Macros are one of the really powerful features in Vim that can significantly speed up your workflow:
q + register to start recording a macro, then q again to stop recording
@<register> to apply (play back) the macro
Let’s use a macro to extract the names from these emails!
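One possible macro for this exercise might look like the following (this is a sketch: it assumes each line is a plain email like jane.doe@example.com and that "extracting the name" means keeping only the part before the @):

```
qa    " start recording a macro into register a
0     " go to the start of the line
f@    " find the @ on this line
D     " delete from the @ to the end of the line
j     " move down to the next line
q     " stop recording
5@a   " replay the macro on the next 5 lines
```

The real power here is composition: the macro is just the same normal-mode commands you already know, recorded once and replayed with a count.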
There are tons of plugins for extending Vim. Contrary to outdated advice that you might find on the internet, you do not need to use a plugin manager for Vim (since Vim 8.0).
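Since Vim 8.0, anything placed under ~/.vim/pack/<group>/start/ is loaded automatically, so a plugin manager is optional. A minimal sketch (the plugin shown is just an example):

```shell
# Vim auto-loads plugins from ~/.vim/pack/<group>/start/
mkdir -p ~/.vim/pack/plugins/start
# Then drop a plugin into that directory, e.g.:
# git clone https://github.com/tpope/vim-commentary ~/.vim/pack/plugins/start/vim-commentary
```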
https://github.com/amix/vimrc
https://vimconfig.com
Without giving a long, non-exhaustive list of plugins, here are some really cool ones you should try out!
These plugins not only try to speed up your vim workflows, but also add additional functionality to vim.
Many tools support Vim emulation. The quality varies from good to great; depending on the tool, it may not support the fancier Vim features, but most cover the basics pretty well.
Vim is a really powerful editor if you are able to master it. Don’t worry, there are plenty of resources!
:help <command> – get the manual for a specific command
vimtutor – built-in Vim tutor. Give it a read; it shouldn’t take more than 30 minutes
VimGolf – a really good practice site: edit the file in the fewest keystrokes
This workshop was loosely based on MIT's The Missing Semester of Your CS Education:
:q! - force quit file without saving
Braces
%: go to corresponding braces
Repeating
10j: to go down 10 times
Scroll
Ctrl + d: scroll down, Ctrl + u: scroll up
Find (inline)
f: find a character after the cursor on the current line, F: find a character before the cursor, ;/,: to navigate between results
Search (file)
/ + query: to search forward from the cursor, ? + query: to search backwards from the cursor
A to insert at the end of the line
o to start a new line below and insert
O to start a line above the current selection and insert
dd – delete the entire line
d$ – delete till end of line
dt + char – delete till character
c$ – change till end of line
ct + char – change till certain character
yt + char – yank till (but not including) a certain character
p – put/paste whatever is in the buffer (below the current line)
P – put/paste whatever is in the buffer (above the current line)
u – undo the previous action
Ctrl + r – redo the last undone action
:vsp – open a new vertical split
:sort – sort selected text
Basic
hjkl: left, down, up, right
Word
w: next word, b: back a word
File
gg: go to top of file, G: go to bottom of file
Line
0: beginning of line, $: end of line, ^: first non-whitespace of line
Line Numbers
34G: Go to line 34
Screen
H: high part of screen, M: middle of screen, L: low part of screen

The basics of the Zig language are quite straightforward. Given here are examples of each basic concept, which should be picked up and experimented upon.
We have to import the standard library here using @import("std"), which we then store in the std variable. We can access functions (and types) from the standard library using the . syntax.
Arrays in Zig have a fixed size (defined in the type of the array). There aren't many differences between Zig arrays and those found in C, C++, Java or Go besides syntax.
Zig defines two kinds of pointers: pointers to a single value, and pointers to multiple values. This is a departure from C and C++, where a pointer to an array of 1,000,000 integers looks the same as a pointer to a single integer (int*).
Unions allow you to store one of their members at a time, instead of all at once like in a struct. Zig unions can be treated similarly to C unions, except that (in safe build modes) they throw a runtime error if you access the incorrect member, at the cost of a larger runtime size due to storing extra info.


const std = @import("std");
pub fn main() !void {
std.debug.print("Hello, world!\n", .{});
}
// ints
const my_32_bit_int: i32 = -42;
const my_64_bit_int: i64 = -323;
const my_32_bit_unsigned_int: u32 = 3424;
const my_64_bit_unsigned_int: u64 = 34;
// more ints ...?
const my_17_bit_int: i17 = 17;
const my_38_bit_unsigned_int: u38 = 38;
// floats
const my_32_bit_float: f32 = 3.14;
const my_64_bit_float: f64 = 3.14159;
// bool
const my_bool: bool = true;
// string
const my_string: []const u8 = "Hello, world!";
std.debug.print("32-bit int: {}\n", .{my_32_bit_int});
std.debug.print("64-bit int: {}\n", .{my_64_bit_int});
std.debug.print("32-bit unsigned int: {}\n", .{my_32_bit_unsigned_int});
std.debug.print("64-bit unsigned int: {}\n", .{my_64_bit_unsigned_int});
std.debug.print("17-bit int: {}\n", .{my_17_bit_int});
std.debug.print("38-bit unsigned int: {}\n", .{my_38_bit_unsigned_int});
std.debug.print("32-bit float: {}\n", .{my_32_bit_float});
std.debug.print("64-bit float: {}\n", .{my_64_bit_float});
std.debug.print("bool: {}\n", .{my_bool});
std.debug.print("string: {s}\n", .{my_string});
// Arrays have a fixed size.
var my_int_array = [5]i32{ 1, 2, 3, 4, 5 };
// You can use the `_` character to have the compiler infer the size.
const my_other_int_array = [_]i32{ 1, 2, 3, 4, 5 };
std.debug.print("my_int_array: {any}\n", .{my_int_array});
std.debug.print("my_other_int_array: {any}\n", .{my_other_int_array});
// `len` is the only field of an array.
std.debug.print("length of my_int_array: {any}\n", .{my_int_array.len});
// Access and modify items within an array using the `[]` syntax.
std.debug.print("my_int_array[1]: {}\n", .{my_int_array[1]});
// Arrays are copied by default.
const my_copied_int_array = my_int_array;
my_int_array[2] = 33;
std.debug.print("my_copied_int_array: {any}\n", .{my_copied_int_array});
// Arrays can have sentinel values.
const my_int_sentinel_array = [5:0]i32{ 1, 2, 3, 4, 5 };
std.debug.print("my_int_sentinel_array: {any}\n", .{my_int_sentinel_array});
std.debug.print("my_int_sentinel_array.len: {any}\n", .{my_int_sentinel_array.len});
std.debug.print("my_int_sentinel_array (with sentinel): {any}\n", .{@as([6]i32, @bitCast(my_int_sentinel_array))});
// Pointers work similarly to C or C++.
var some_int: i32 = 42;
const some_int_pointer: *i32 = &some_int;
std.debug.print("some_int_pointer: {*}\n", .{some_int_pointer});
// Zig distinguishes between pointers to single values and pointers to multiple values (C-style arrays).
var some_int_array = [3]i32{ 1, 2, 3 };
const single_int_pointer: *i32 = &some_int_array[0];
const many_int_pointer: [*]i32 = &some_int_array;
std.debug.print("single_int_pointer: {*}\n", .{single_int_pointer});
std.debug.print("many_int_pointer: {*}\n", .{many_int_pointer});
std.debug.print("single_int_pointer == many_int_pointer: {}\n", .{@intFromPtr(single_int_pointer) == @intFromPtr(many_int_pointer)});
// Just like arrays, pointers to multiple values can have sentinel values.
var some_int_sentinel_array = [3:0]i32{ 1, 2, 3 };
const many_int_sentinel_pointer: [*:0]i32 = &some_int_sentinel_array;
std.debug.print("many_int_sentinel_pointer[3]: {}\n", .{many_int_sentinel_pointer[3]});
var some_array = [_]i32{ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
// Slices are a view into an array.
const some_slice: []i32 = some_array[2..6];
std.debug.print("some_slice: {any}\n", .{some_slice});
// Slices are represented by a pointer to the first element and a length.
std.debug.print("some_slice.ptr: {*}\n", .{some_slice.ptr});
std.debug.print("some_slice.len: {}\n", .{some_slice.len});
std.debug.print("some_slice.ptr == &some_array[2]: {}\n", .{@intFromPtr(some_slice.ptr) == @intFromPtr(&some_array[2])});
// Slices should be treated as pointers to arrays. Modifying the slice modifies the original array.
some_slice[2] = 33;
std.debug.print("some_slice: {any}\n", .{some_slice});
std.debug.print("some_array: {any}\n", .{some_array});
// Slices can be sliced further.
const some_subslice = some_slice[1..3];
std.debug.print("some_subslice: {any}\n", .{some_subslice});
// Just like arrays, slices can have sentinel values.
var some_sentinel_array = [10:0]i32{ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
const some_sentinel_slice: [:0]i32 = some_sentinel_array[2..10];
std.debug.print("some_sentinel_slice: {any}\n", .{some_sentinel_slice});
std.debug.print("some_sentinel_slice[8]: {}\n", .{some_sentinel_slice[8]});
const x = 5;
if (x > 7) {
std.debug.print("x is greater than 7!\n", .{});
} else {
std.debug.print("x is smaller than or equal to 7...\n", .{});
}
// If-else can also be used as expressions rather than statements.
const y = if (x > 4) 10 else 20;
std.debug.print("y: {}\n", .{y});
const x = 34;
switch (x) {
1...5 => {
std.debug.print("x is between 1 and 5!\n", .{});
},
6...10 => {
std.debug.print("x is between 6 and 10!\n", .{});
},
else => {
std.debug.print("x is not between 1 and 10...\n", .{});
},
}
// Switch can also be used as an expression rather than a statement.
const y = switch (x) {
1...5 => 10,
6...10 => 20,
else => 30,
};
std.debug.print("y: {}\n", .{y});
var x: i32 = 0;
while (x < 32) {
std.debug.print("x: {}\n", .{x});
x += 1;
}
// You can also pass an expression to perform each iteration.
var y: i32 = 0;
while (y < 32) : (y += 1) {
std.debug.print("y: {}\n", .{y});
}
var some_array = [_]i32{ 1, 2, 3, 4, 5 };
for (some_array) |item| {
std.debug.print("array item: {}\n", .{item});
}
// Slices work too!
const some_slice = some_array[1..4];
for (some_slice) |item| {
std.debug.print("slice item: {}\n", .{item});
}
// You can also iterate over pointers to each element rather than the value.
for (some_slice) |*item| {
std.debug.print("slice item pointer: {*}\n", .{item});
}
std.debug.print("{any}\n", .{some_slice});
const std = @import("std");
const Point = struct {
x: i32,
y: i32,
};
const Rect = struct {
top_left: Point,
bottom_right: Point,
// You can define methods in structs.
fn area(self: Rect) i32 {
return (self.bottom_right.x - self.top_left.x) * (self.bottom_right.y - self.top_left.y);
}
};
pub fn main() !void {
var point1 = Point{ .x = 32, .y = 32 };
const point2 = Point{ .x = 99, .y = 44 };
const rect = Rect{ .top_left = point1, .bottom_right = point2 };
std.debug.print("point1: {any}\n", .{point1});
std.debug.print("point2: {any}\n", .{point2});
std.debug.print("rect: {any}\n", .{rect});
// You can access struct members using `.`. Works for nesting too.
std.debug.print("point1.x: {}\n", .{point1.x});
std.debug.print("rect.bottom_right.y: {}\n", .{rect.bottom_right.y});
// Methods are accessed in a similar way.
std.debug.print("rect.area(): {}\n", .{rect.area()});
// Structs are copied.
point1.x = 99;
std.debug.print("point1.x: {}\n", .{point1.x});
std.debug.print("rect.top_left.x: {}\n", .{rect.top_left.x});
}
const std = @import("std");
const Color = enum {
red,
green,
blue,
yellow,
brown,
// ...
};
// The integer representation of enums can be overridden.
const Operation = enum(u8) {
add = 0,
sub = 1,
mul = 2,
div = 3,
rem = 4,
shift_left = 5,
shift_right = 6,
// Enums can have methods too!
fn name(self: Operation) []const u8 {
switch (self) {
.add => return "add",
.sub => return "sub",
.mul => return "mul",
.div => return "div",
.rem => return "rem",
.shift_left => return "shift_left",
.shift_right => return "shift_right",
}
}
};
pub fn main() !void {
std.debug.print("red: {any}\n", .{Color.red});
std.debug.print("blue: {any}\n", .{Color.blue});
std.debug.print("mul: {any}\n", .{Operation.mul});
std.debug.print("mul (tag value): {}\n", .{@intFromEnum(Operation.mul)});
std.debug.print("shift_left: {any}\n", .{Operation.shift_left});
std.debug.print("shift_left (tag value): {}\n", .{@intFromEnum(Operation.shift_left)});
// You can also use enums in switch statements.
const some_color = Color.red;
switch (some_color) {
.red => std.debug.print("some_color is red\n", .{}),
.green => std.debug.print("some_color is green\n", .{}),
.blue => std.debug.print("some_color is blue\n", .{}),
.yellow => std.debug.print("some_color is yellow\n", .{}),
.brown => std.debug.print("some_color is brown\n", .{}), // try removing this and compiling
}
}
const std = @import("std");
// We could use an enum to represent a shape, but we can't store any
// shape data within it.
const ShapeEnum = enum {
circle,
rectangle,
square,
};
// We can use a union instead. A union is like a struct, but it can only
// store one of its members at a time, rather than all at once.
const ShapeUnion = union {
circle: struct { radius: f32 },
rectangle: struct { width: f32, height: f32 },
square: struct { size: f32 },
};
const TaggedShapeUnion = union(ShapeEnum) {
circle: struct { radius: f32 },
rectangle: struct { width: f32, height: f32 },
square: struct { size: f32 },
};
// We can also use an automatic enum to tag the union.
const TaggedShapeUnionAutomatic = union(enum) {
circle: struct { radius: f32 },
rectangle: struct { width: f32, height: f32 },
square: struct { size: f32 },
};
pub fn main() !void {
// We can access members of a union directly.
const some_rectangle = ShapeUnion{ .rectangle = .{ .width = 3.14, .height = 2.71 } };
std.debug.print("some_rectangle has width {} and height {}\n", .{ some_rectangle.rectangle.width, some_rectangle.rectangle.height });
// But what if we do the following???
// ------------ try to uncomment the code below -------------
// std.debug.print("some_rectangle has radius {}\n", .{some_rectangle.circle.radius});
// Notice how given a shape of type ShapeUnion, we don't know which kind
// shape it is? We can't switch on it...
// ------------ try to uncomment the code below -------------
// const some_shape = ShapeUnion{ .circle = .{ .radius = 3.14 } };
// switch (some_shape) {
// .circle => std.debug.print("some_shape is a circle of radius {}\n", .{some_shape.circle.radius}),
// .rectangle => std.debug.print("some_shape is a rectangle of width {} and height {}\n", .{ some_shape.rectangle.width, some_shape.rectangle.height }),
// .square => std.debug.print("some_shape is a square of size {}\n", .{some_shape.square.size}),
// }
// We must "tag" the union with an enum to know which kind of shape it is.
const some_shape = TaggedShapeUnion{ .circle = .{ .radius = 3.14 } };
switch (some_shape) {
.circle => std.debug.print("some_shape is a circle of radius {}\n", .{some_shape.circle.radius}),
.rectangle => std.debug.print("some_shape is a rectangle of width {} and height {}\n", .{ some_shape.rectangle.width, some_shape.rectangle.height }),
.square => std.debug.print("some_shape is a square of size {}\n", .{some_shape.square.size}),
}
}
const std = @import("std");
pub fn foo(x: i32, y: f32) f32 {
std.debug.print("inside the foo function... x: {}, y: {}\n", .{ x, y });
return @as(f32, @floatFromInt(x + 2)) * y;
}
pub fn main() !void {
std.debug.print("foo(3, 5.34) returned {}\n", .{foo(3, 5.34)});
}
// Zig values can never be `null`, unless they are explicitly marked as optional.
const some_int: i32 = 34;
const some_optional_int: ?i32 = 34;
const some_optional_int_null: ?i32 = null;
std.debug.print("some_int: {}\n", .{some_int});
std.debug.print("some_optional_int: {any}\n", .{some_optional_int});
std.debug.print("some_optional_int_null: {any}\n", .{some_optional_int_null});
// You can check if an optional is null using an if-else, and unwrap it at the same time.
if (some_optional_int) |an_int| {
std.debug.print("some_optional_int is not null: {}\n", .{an_int});
} else {
std.debug.print("some_optional_int is null\n", .{});
}
// You can also unwrap it with a default value.
const an_int = some_optional_int orelse 0;
std.debug.print("an_int: {}\n", .{an_int});
// If you know an optional is definitely not null, you can unwrap it using `.?`.
std.debug.print("some_optional_int is definitely not null: {}\n", .{some_optional_int.?});
// But this will crash if you attempt to unwrap a null.
// ------------ try to uncomment the code below -------------
// std.debug.print("some_optional_int_null is definitely not null: {}\n", .{some_optional_int_null.?});
// Optionals aren't for free. They take up more space.
std.debug.print("size of i32: {}\n", .{@sizeOf(i32)});
std.debug.print("size of ?i32: {}\n", .{@sizeOf(?i32)});
// However, they're free if the underlying value is a pointer! Any guesses why?
std.debug.print("size of *i32: {}\n", .{@sizeOf(*i32)});
std.debug.print("size of ?*i32: {}\n", .{@sizeOf(?*i32)});
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<!-- FOR THE CURIOUS: This site was made by @thebarrytone. Don't tell my mom. -->
<title>Motherfucking Website</title>
</head>
<body>
<header>
<h1>This is a motherfucking website.</h1>
<aside>And it's fucking perfect.</aside>
</header>
<h2>Seriously, what the fuck else do you want?</h2>
<p>You probably build websites and think your shit is special. You think your 13 megabyte parallax-ative home page is going to get you some fucking Awwward banner you can glue to the top corner of your site. You think your 40-pound jQuery file and 83 polyfills give IE7 a boner because it finally has box-shadow. Wrong, motherfucker. Let me describe your perfect-ass website:</p>
<ul>
<li>Shit's lightweight and loads fast</li>
<li>Fits on all your shitty screens</li>
<li>Looks the same in all your shitty browsers</li>
<li>The motherfucker's accessible to every asshole that visits your site</li>
<li>Shit's legible and gets your fucking point across (if you had one instead of just 5mb pics of hipsters drinking coffee)</li>
</ul>
<h3>Well guess what, motherfucker:</h3>
<p>You. Are. Over-designing. Look at this shit. It's a motherfucking website. Why the fuck do you need to animate a fucking trendy-ass banner flag when I hover over that useless piece of shit? You spent hours on it and added 80 kilobytes to your fucking site, and some motherfucker jabbing at it on their iPad with fat sausage fingers will never see that shit. Not to mention blind people will never see that shit, but they don't see any of your shitty shit.</p>
<p>You never knew it, but this is your perfect website. Here's why.</p>
<h2>It's fucking lightweight</h2>
<p>This entire page weighs less than the gradient-meshed facebook logo on your fucking Wordpress site. Did you seriously load 100kb of jQuery UI just so you could animate the fucking background color of a div? You loaded all 7 fontfaces of a shitty webfont just so you could say "Hi." at 100px height at the beginning of your site? You piece of shit.</p>
<h2>It's responsive</h2>
<p>You dumbass. You thought you needed media queries to be responsive, but no. Responsive means that it responds to whatever motherfucking screensize it's viewed on. This site doesn't care if you're on an iMac or a motherfucking Tamagotchi.</p>
<h2>It fucking works</h2>
<p>Look at this shit. You can read it ... that is, if you can read, motherfucker. It makes sense. It has motherfucking hierarchy. It's using HTML5 tags so you and your bitch-ass browser know what the fuck's in this fucking site. That's semantics, motherfucker.</p>
<p>It has content on the fucking screen. Your site has three bylines and link to your dribbble account, but you spread it over 7 full screens and make me click some bobbing button to show me how cool the jQuery ScrollTo plugin is.</p>
<p>Cross-browser compatibility? Load this motherfucker in IE6. I fucking dare you.</p>
<h2>This is a website. Look at it. You've never seen one before.</h2>
<p>Like the man who's never grown out his beard has no idea what his true natural state is, you have no fucking idea what a website is. All you have ever seen are shitty skeuomorphic bastardizations of what should be text communicating a fucking message. This is a real, naked website. Look at it. It's fucking beautiful.</p>
<h3>Yes, this is fucking satire, you fuck</h3>
<p>I'm not actually saying your shitty site should look like this. What I'm saying is that all the problems we have with websites are <strong>ones we create ourselves</strong>. Websites aren't broken by default, they are functional, high-performing, and accessible. You break them. You son-of-a-bitch.</p>
<blockquote cite="https://www.vitsoe.com/us/about/good-design">"Good design is as little design as possible."<br>
- some German motherfucker
</blockquote>
<hr>
<h2>Epilogue</h2>
<p>From the philosophies expressed (poorly) above, <a href="http://txti.es">txti</a> was created. You should try it today to make your own motherfucking websites.</p>
<!-- yes, I know...wanna fight about it? -->
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-45956659-1', 'motherfuckingwebsite.com');
ga('send', 'pageview');
</script>
</body>


