Django: Create and test a view with two forms

Hello!

I’ve been working on a front page for a personal site. The first thing was to decide what should be on it. After some research and much thought, I decided to have both the login and register forms on the page, along with a single phrase that describes the site. I like this design mostly because the user is at zero-click distance from both login and register.

With LinkedIn and Quora as my main inspirations, I eventually implemented it. Here is the first version:

[Screenshot: login and register forms on the front page]

I should improve the brand name and its core message soon, but I liked the design.

Fine, enough said about the design; let’s see how it works. FYI, I’m using Django 2.1 and assuming that you know how to build a view with a single form. If that premise isn’t valid, refer to the Django forms documentation.

I found two options for having two forms in a single page:

1) Have separate views and use the same template in both.

2) Have a single view and template.

I don’t like 1) because, as far as my knowledge goes, it requires two different URLs for the “same” front page, even though it makes handling the forms and creating tests for them easy.

To follow this option, you just need to create two views and point each form to its respective view by setting the form’s action to that view’s URL. The rest is similar to creating a view with a single form:
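
A minimal sketch of what the two views could look like (the form classes, template name and redirect target below are illustrative, not the exact ones from my project):

# views.py -- sketch only; LoginForm/RegisterForm and the 'home' URL are placeholders
from django.shortcuts import redirect, render

from .forms import LoginForm, RegisterForm


def login_view(request):
    form = LoginForm(request.POST or None)
    if request.method == 'POST' and form.is_valid():
        # authenticate and log the user in here
        return redirect('home')
    return render(request, 'front_page.html',
                  {'login_form': form, 'register_form': RegisterForm()})


def register_view(request):
    form = RegisterForm(request.POST or None)
    if request.method == 'POST' and form.is_valid():
        form.save()
        return redirect('home')
    return render(request, 'front_page.html',
                  {'login_form': LoginForm(), 'register_form': form})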

That’s it.

Now, for 2) we just need a way to identify, in the view, which form triggered the POST request. There are multiple ways to do this. I chose a trick I found out there: add a name attribute to each submit button and identify the form by it. The template should have something like:
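
A sketch of the template (field rendering simplified; only the URL name front-page matters here):

<!-- front_page.html (sketch) -->
<form method="post" action="{% url 'front-page' %}">
  {% csrf_token %}
  {{ login_form.as_p }}
  <input type="submit" name="login_form" value="Login">
</form>

<form method="post" action="{% url 'front-page' %}">
  {% csrf_token %}
  {{ register_form.as_p }}
  <input type="submit" name="register_form" value="Register">
</form>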

Notice that both forms point to the same view {% url 'front-page' %} and the submit inputs have different names.

In the view we just need to find which name is in the POST request, like:
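
A sketch of the view, assuming the same hypothetical LoginForm/RegisterForm as above:

# views.py -- sketch only
def front_page(request):
    login_form = LoginForm()
    register_form = RegisterForm()

    if request.method == 'POST':
        if 'login_form' in request.POST:
            login_form = LoginForm(request.POST)
            if login_form.is_valid():
                # authenticate and log the user in here
                return redirect('home')
        elif 'register_form' in request.POST:
            register_form = RegisterForm(request.POST)
            if register_form.is_valid():
                register_form.save()
                return redirect('home')

    return render(request, 'front_page.html',
                  {'login_form': login_form, 'register_form': register_form})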

Now the front page works with both the login and register forms. We should create some tests to guarantee the view works properly.

There are multiple tutorials about setting up a test environment for Django, so I’ll focus only on how to test the right form in a view that has two. There’s no big secret; the snippet below speaks for itself:
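
A sketch of the tests (the field names and the redirect assertion assume a view like the one above; the original snippet also contained two extra challenge tests, not shown here):

# tests.py -- sketch only
from django.contrib.auth.models import User
from django.test import TestCase


class FrontPageTests(TestCase):
    def setUp(self):
        User.objects.create_user(username='Testuser1', password='senha8dg')

    def test_user_login(self):
        response = self.client.post(
            '/', {'login_form': True, 'username': 'Testuser1', 'password': 'senha8dg'})
        # a successful login redirects away from the front page
        self.assertEqual(response.status_code, 302)

    def test_user_register_creates_user(self):
        self.client.post(
            '/', {'register_form': True, 'username': 'Testuser2',
                  'password1': 'senha8dg', 'password2': 'senha8dg'})
        self.assertTrue(User.objects.filter(username='Testuser2').exists())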

Look at the POST request in test_user_login():

self.client.post('/', {'login_form': True, 'username': 'Testuser1', 'password': 'senha8dg'})

It makes a POST request to the URL '/' (front page) and has the key 'login_form' in the data it passes to the view. This acts similarly to clicking the login button. Testing the register form is similar, see test_user_register_creates_user().

That’s it, we can test the front page now 🙂

Furthermore, there are two challenge tests in the snippet above; they initially failed when I built the first version, and fixing them was a good learning experience, which I recommend.

I might make a post for them, maybe, the future will tell… See you!

Bootstrap 4: Delete confirmation modal for list of items

Hello again!

A couple days ago I wanted to use a modal to confirm the deletion of an item in a list. Like this:

[Screenshot: delete confirmation modal]

The requirements were: 1) have a single modal in the page and 2) make the modal reusable, since it’s possible to delete items from other views too.

When there is a single element assigned to the modal, we can use something similar to a logout confirmation modal. However, when there are multiple elements in a list, this isn’t enough. It’s necessary to track which element last triggered the modal so we delete the right one.

To do this, we can add an attribute to the modal, let’s call it caller-id, and assign to it the id of the element which last called the modal. When the confirm button is clicked, it redirects to the href of the caller element, which should be the URL to delete it.

In order to meet the second requirement, we should write the modal in a separate HTML file, which can be included in any desired page. The one I used was:
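
Roughly like this (a sketch of a generic Bootstrap 4 modal; the element ids and texts are illustrative):

<!-- board/templates/board/confirm_delete_modal.html (sketch) -->
<div class="modal fade" id="confirmDeleteModal" tabindex="-1" role="dialog" caller-id="">
  <div class="modal-dialog" role="document">
    <div class="modal-content">
      <div class="modal-header">
        <h5 class="modal-title">Confirm deletion</h5>
      </div>
      <div class="modal-body">Are you sure you want to delete this item?</div>
      <div class="modal-footer">
        <button type="button" class="btn btn-secondary" data-dismiss="modal">Cancel</button>
        <button type="button" class="btn btn-danger" id="confirmDeleteButton">Delete</button>
      </div>
    </div>
  </div>
</div>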

And, to associate a button to the modal, we can use something similar to:
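
For example (a sketch; the URL name delete-question and the id prefix are hypothetical):

{% include "board/confirm_delete_modal.html" %}

<a href="{% url 'delete-question' question.id %}" id="question-{{ question.id }}" class="confirm-delete" data-toggle="modal" data-target="#confirmDeleteModal">Delete</a>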

I’m using Django 2.1, and the first line of the snippet above shows how to include the confirm_delete_modal in the template. My app is named board; you should replace it with the right path for the file.

Line 3 shows how to associate a tag with the modal. In its original template, I iterate over a list of questions, where the current question is stored in the object question.

Note that every element associated with the modal must have a different id and, in this case, the class confirm-delete.

The last step is to perform the actions using JavaScript. I used jQuery in this example:
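
A sketch of the two handlers, matching the ids used in the modal sketch above:

// store the id of the element that last opened the modal
$('.confirm-delete').on('click', function () {
  $('#confirmDeleteModal').attr('caller-id', $(this).attr('id'));
});

// on confirmation, follow the href of the element that opened the modal (the delete URL)
$('#confirmDeleteButton').on('click', function () {
  var callerId = $('#confirmDeleteModal').attr('caller-id');
  window.location.href = $('#' + callerId).attr('href');
});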

The first block adds an event handler for the click event in elements of the class confirm-delete. When the click happens, it writes the id of the element in the caller-id of the modal.

The second block adds a handler for the click event of the confirmation button inside the modal. It finds the element which toggled the modal, via the caller-id, and redirects the page to its href, the delete URL in this case.

That’s it, now we have a generic confirm deletion modal. It can be reused in other views by including its HTML in the template and adding the confirm-delete class to the buttons which should trigger it.

By writing this post I realized that I didn’t really need the class confirm-delete; using the selector data-target="#confirmDeleteModal" instead seems simply better. I’ll try it that way next time :3

Bootstrap 4: Trigger page redirect after modal is hidden

Hello, long time no see… I’m learning the basics of web development and am building a site to use as my workshop. Some things I try to do appear to be very common, but there isn’t an easy-to-find thread on Stack Overflow or blog post about them; the information is there but is scattered.

Executing a redirect after hiding a modal is one of these things and I’m here to show a direct approach for doing it using Bootstrap 4.1. Also, I’m using Django 2.1.

I needed it to create this logout flow: 1) click the logout button, 2) a modal shows the log out confirmation and 3) the user is redirected to the logout URL.

Since I’m using Django’s built-in registration views [1], the logout is done by simply redirecting the user to {% url 'logout' %}. That URL will log the user out and render a specific template, which, in this case, I left with a single script that redirects to the login page.

Okay, the first step is to add the modal to the template; put it anywhere inside the <body> of the page. I’m using a simple Bootstrap 4 modal [2]:
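
Something like this (a sketch; the id logoutModal is the one used in the rest of the post, the body text is illustrative):

<!-- logout confirmation modal (sketch) -->
<div class="modal fade" id="logoutModal" tabindex="-1" role="dialog">
  <div class="modal-dialog" role="document">
    <div class="modal-content">
      <div class="modal-body">
        You are being logged out…
      </div>
    </div>
  </div>
</div>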

And the custom CSS:

The second step is to show this modal when a button is clicked. I’m using a dropdown item [3] as a button inside a navigation bar [4]. The tag of this item is:
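
Roughly (a sketch):

<a class="dropdown-item" href="{% url 'logout' %}" data-toggle="modal" data-target="#logoutModal">Logout</a>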

It toggles the modal with id="logoutModal", as you can see in the data-toggle and data-target properties. This is enough to show the logout confirmation when it’s clicked.

You can also see that this item has an href that points to the correct logout view. However, this isn’t automatically triggered along with showing the modal. I think Bootstrap overrides this behavior, but let’s keep the href there so we can use it in the next step.

To accomplish the desired redirect I needed to use JavaScript (jQuery). The idea is to bind an event handler for when the modal is hidden, like this:
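
A sketch of the handler (the selector used to find the logout link is an assumption):

// bind on document so the handler works even though the modal starts hidden
$(document).on('hidden.bs.modal', '#logoutModal', function () {
  window.location.href = $('a[data-target="#logoutModal"]').attr('href');
});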

This triggers the redirect whenever the modal is hidden. Notice that in the function I get the URL from the href property of the logout button. We could write the URL directly to window.location, but getting it from the href puts all the important information in the same place, the tag which toggles the modal.

Beware, you’ll find many people out there suggesting the following:
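
Something along these lines:

$('#logoutModal').on('hidden.bs.modal', function () {
  window.location.href = $('a[data-target="#logoutModal"]').attr('href');
});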

That won’t work in this case because the modal is initially hidden, so jQuery won’t bind the handler to it. You should bind the handler to something visible, document for instance.

Now, when the logout button is clicked, the modal shows and waits for the user to click anywhere on the screen for it to disappear and trigger the redirect. Additionally, I’m using the following event handler to hide the modal 5 seconds after it’s shown:
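
A sketch of it:

// hide the modal 5 seconds after it is shown
$(document).on('shown.bs.modal', '#logoutModal', function () {
  setTimeout(function () {
    $('#logoutModal').modal('hide');
  }, 5000);
});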

The final step is to redirect to the login page after the logout. I use the following logged_out template:
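
A sketch of it, assuming the login URL is named 'login':

<!-- logged_out.html (sketch): a single script that redirects to the login page -->
<script>
  window.location.href = "{% url 'login' %}";
</script>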

The logout flow is all set now, the result will look similar to:

[GIF: redirect triggered after the modal fades]

In this flow the confirmation is shown before the actual logout. In some corner cases the logout might fail even after the confirmation that it worked, which is a problem. However, it works most of the time and I think the effect looks nice :3

To fix this I’ll probably have to use a custom logout view, but this isn’t worth the effort for now; the site’s core feature isn’t working yet, and that is a bigger priority.

Also, I’m sure there are better ways to organize the abstract links between HTML and JS; my initial impression is that there aren’t many organization rules in web development. I hope to develop this intuition with practice.

That’s it, I’ll try to post everything that I find relevant in building this site, so I should be back soon 🙂

References:

[1] Django Authentication

[2] Bootstrap Modal

[3] Bootstrap Dropdown

[4] Bootstrap Navbar

Visual Studio: Put git hash in version

Hello again! Recently I faced the problem of needing to recover the code that generated a DLL just by looking at the DLL itself.

The project had many independent contributors and was deployed to a few different environments, which resulted in a few versions built from local branches.

Version numbers alone didn’t solve it, but let’s talk about them first.

Versioning is a common problem in software development, however, there isn’t a consensus about how to properly version a project. There are some guidelines, and a lot of discussion out there, but in the end, the team should choose what works best for them.

I like to use three numbers, major.minor.revision; starting at 1.0.0, it progresses like this:

  • Increase the major version whenever the changes aren’t backwards compatible. Usually those are big changes, and this number shouldn’t be increased often.
  • Increase the minor version whenever you add a new feature which is backwards compatible.
  • Increase the revision after minor changes, like bug fixes, organization commits etc.

I find those three numbers enough, along with the trick I’ll show you next, but a fourth number at the right might be useful:

  • Local version: how many local changes (unpushed commits, for instance) were made.

Back to the initial problem: of its many solutions, I found it best to put git info into the DLL version and make it automatic, so there is no chance of forgetting to do it. The trick was to use hooks.

First I tried git hooks. It didn’t work, but let’s take a look at them anyway :3

Git hooks are shell scripts that are hooked to an action; they are executed whenever the action is triggered, before or after it, depending on the hook. Those scripts must be put in the folder 'ProjectFolder/.git/hooks', where you can already see a few sample hooks shipped with git. To activate a hook, just remove the .sample extension and it’s ready.

The idea was to write the git hash into the version file right after a commit and amend the changes; that’s it, only a post-commit hook is needed.

Little did I know that there is no real amend: there is only removing the last commit, adding the new changes, and making a new commit with a different hash, so the hash written into the version would be meaningless.

Luckily there are also build hooks, the same principle as git hooks applied to the build process, so the solution was to write the git hash into the version file right before building the DLL. I used Visual Studio 2013 and C# for this, but it should apply to other tools as well. (This one works.)

Actually, I preferred to create an additional version file containing only the git info. It’s possible to overwrite the standard version file, but I didn’t want to unload the project for every version change. This can probably also be avoided, but I didn’t find an easy way :3

Visual Studio offers a visual interface to define hooks, go to Project Properties -> Build Events and you can see the text boxes for Pre-build and Post-build events. As far as I know those commands will be executed in the Windows PowerShell at the right time. You can also define the hooks directly in the csproj file, which I preferred.

This is an XML file, where the DefaultTargets property of the Project tag registers the hooks, with Build being the main build event. The events are executed in the same order they appear. Take a look:
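
A sketch of the Project tag (attribute values other than DefaultTargets are illustrative):

<Project DefaultTargets="Version;Build;Clean" ToolsVersion="12.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">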


You can see that the Version hook, which creates the version file, is executed right before building the project. After the build, the Clean hook, which deletes the version file, is executed.

Ok, now we just need to register the hooks. Luckily it’s very easy: just create a Target tag, right under the Project tag, with the hook name as its Name attribute, like this:
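
A sketch of the two targets (the actual tasks that write and delete the version file go inside them):

<Target Name="Version">
  <!-- tasks that create the extra version file go here -->
</Target>

<Target Name="Clean">
  <!-- tasks that delete the version file after the build go here -->
</Target>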


The only missing piece is the git hash. To add it into the version I used the package MSBuildTasks, which is available via NuGet. We just need to install it and add the following tag to the csproj file:
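
Something like this (the package version in the path is just an example, adjust it to the one installed):

<Import Project="..\packages\MSBuildTasks.1.5.0.235\build\MSBuildTasks.targets" />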


Right under the tag:
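
In a standard C# project, that tag is usually the Microsoft.CSharp.targets import:

<Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />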


Beware of the MSBuildTasks version, check which one is installed in the packages folder inside the project folder.

With MSBuildTasks you can use the GitVersion tag, which defines a few variables, one of them being the git hash. Since a code snippet is worth a thousand words:
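
A sketch of the Version target (writing the extra version file with the AssemblyInfo task from MSBuildTasks is my assumption here, and the output path is illustrative):

<Target Name="Version">
  <!-- point the task at the repository and capture the commit hash -->
  <GitVersion LocalPath="$(MSBuildProjectDirectory)">
    <Output TaskParameter="CommitHash" PropertyName="CommitHash" />
  </GitVersion>
  <!-- write the extra version file with the hash in the informational version -->
  <AssemblyInfo CodeLanguage="CS"
                OutputFile="Properties\VersionInfo.cs"
                AssemblyInformationalVersion="git hash - $(CommitHash)" />
</Target>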


You can see that the git directory is defined and that GitVersion outputs the parameter CommitHash from its inner property with the same name. Right after, this parameter is used in the AssemblyInformationalVersion as "git hash - $(CommitHash)".

The whole csproj file will look similar to this:


...

The result is that after building the project, the output file property “Product Version” contains the git hash.

Hope this can be useful to someone 🙂

IPtables-translate, JSON and misfortune

In these past weeks I started with an iptables translation.

We know that nftables is here to replace iptables, so it is natural that many of its users have their preferred rulesets written with iptables and would appreciate an easy way to set a similar ruleset using nftables; for this, iptables-translate is provided.

Using iptables-translate is very simple: you just write your iptables rule and it outputs a similar rule in nftables if the rule is supported; if not, the iptables rule is printed. A usage example:

$ iptables-translate -A OUTPUT -m tcp -p tcp --dport 443 -m hashlimit --hashlimit-above 20kb/s --hashlimit-burst 1mb --hashlimit-mode dstip --hashlimit-name https --hashlimit-dstmask 24 -m state --state NEW -j DROP

Translates to:

nft add rule ip filter OUTPUT tcp dport 443 flow table https { ip daddr and 255.255.255.0 timeout 60s limit rate over 20 kbytes/second  burst 1 mbytes} ct state new counter drop

The above example comes from the module I wrote the translation for, hashlimit, which is similar to flow tables in nftables. Each module is translated separately and the code lives in its iptables source file; many of the supported features have their translation written, but some still need work. Writing them is an actual nftables task in this round. Future interns, go and check the xlate functions in the iptables files; it can be of great help to the community and to yourself 🙂

After this task I looked into the JSON export of the nftables ruleset. In the future importing a ruleset via JSON should also be possible, but for now only exporting is. This feature is still being defined and many changes are happening. What I did was to complement a patch to define some standard functions and use them to export rules. JSON in nftables is a little messy; it will probably get more attention soon.

Now about misfortune: last week an accident happened and my notebook is no longer working. I’m trying to have it fixed, but it has stalled my contribution of patches. Hopefully next week this will be sorted out and I can finish some patches.

I’ll probably write a new post about my experience with Outreachy soon; now it is late and I need to go home :), see you.

Documentation weeks

nftables has two main documentation sources:

  • nftables wiki: the wiki provides example-oriented documentation, so the user can see how the features are useful in practice. Usually the wiki also states which kernel and nft versions are needed for each feature. Also, since many nftables users come from iptables, it is useful to compare a feature to the one it replaces in iptables.
  • nft manpages: the manpages are directed at users who have some experience with the software; usually the grammar of a feature is displayed and the existing values for each component are listed, along with a short description.

These past two weeks were all about documenting parts I helped implement and others I didn’t. Providing good documentation is tricky; you should put yourself in the user’s shoes and write what’s relevant to them.

I have a feeling that documenting a feature you didn’t work on leads to better results, since you don’t need to make an effort to visualize the system as an inexperienced user does. However, it is a lot harder; when you are writing references for a feature, it usually means you can’t find other references except the git log and the code itself.

It feels similar to hunting bugs; actually, odds are you find some in the process, or at least some unexpected behavior. I found a few places I thought worthy of improvements, but this thought didn’t ripen, the reason being that fixing them brings more loss than benefit. In these past two weeks I’ve seen this a few times: after some thinking and tracking the code changes, I’d see they are planned behaviors. Using git blame and git log you can track the reason for the changes, and often they’re a trade-off: an undesired behavior is allowed (when it doesn’t break things) to avoid code duplication or too much complexity. I guess I should change my mindset to optimizing for simplicity and code maintenance.

Even though most of the “bugs” weren’t real bugs, I think I found one that really is and will try to fix it for now, see you.

Bugs solving week

It has been only one week since my last post, so this is a short one.

Last week was focused on searching for bugs and solving the ones I was able to. Some of them were suggested by my mentor and others I tried to choose by myself; I wasn’t very lucky with those.

A good(?) thing about bugs is that they happen in every part of the system and you must chase them wherever they are, including places you’re not comfortable in.

For example, one of the bugs was a dependency issue. Usually the build process follows this flow (when autoconf is used):

sh autogen.sh    (1)
./configure      (2)
make             (3)
make install     (4)

There is usually a file named configure.ac, which contains system specifications and dependencies; this file is used in (1) to generate a configure script, which in turn is used in (2) to create the Makefile, needed in (3) to compile your files together. Finally, (4) puts the resulting file in an appropriate place and the program is ready to be executed.

It’s expected that if ./configure finishes without errors then make and make install also will; however, that wasn’t always the case in nftables. To solve this bug I just needed to change the dependencies in configure.ac. The fix patch is a boring one-liner; the fun is in reproducing the bug and testing the fix.

To check the version of a dependency, configure.ac uses PKG_CHECK_MODULES(); this macro searches for the dependency in some specific folders (read man pkg-config). It’s up to the developers to provide a .pc file when the software is installed so pkg-config can find it; sometimes they don’t, and you have two options: search for a source which does, or write this file yourself. See what xtables.pc looks like:

prefix=/usr/local
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
xtlibdir=${exec_prefix}/lib/xtables
includedir=${prefix}/include

Name:           xtables
Description:    Shared Xtables code for extensions and iproute2
Version:        1.6.1
Cflags:         -I${includedir}
Libs:           -L${libdir} -lxtables
Libs.private:   -ldl

Also, sometimes you upgrade or downgrade a library and the .pc file isn’t updated, which misleads your configure script and may cause unwanted behavior; be careful about it.

The other bugs were less interesting: two of them were only a table presentation fix, and the last one I couldn’t reproduce, even after a lot of code digging and configuration changes. Apparently it vanished somehow within the updates, and not much information was given, which makes reproducing it harder.

I’m still working on one; actually it’s a request for a small new feature for the parser. I’ll go into details later when I have some conclusion about it. See you.