INSTABRICK is a part identification system that was successfully crowdfunded in 2019. Units were shipped to backers early in 2020 and it can now be purchased by anyone.
We did some tests with a prototype unit (when it was called PIQABRICK) a while ago and we’ve now been sent a final production version to evaluate. As it’s not something I’d get a lot of use out of, I sent it to Martin, aka CCC, an expert in part identification. He has conducted some very thorough tests to determine whether it actually works and if it’s worth spending €149 on it:
When Huw said that he had an INSTABRICK to review I jumped at the chance. I have been a BrickLink user for about 10 years now and consider myself very good to expert when it comes to searching for previously unidentified parts, especially printed parts and minifigure parts.
So the question of whether a machine is better (both in terms of speed and accuracy) than a human eye is clearly of interest to me. INSTABRICK’s webpage claims that it’ll identify any brick in the blink of an eye, so we’ll be testing whether this can be done.
Inside the box
Before we run the tests, let’s look at what you get. In the pack there is an INSTABRICK Top, a USB type-C cable, a piece of off-white greyish card and another piece of card with minimal instructions and a QR code. Further instructions are available after you register and log in. The software is all online, run through a browser. The instructions also indicate that you need to sit the INSTABRICK top on a 16×16 structure built with three walls, 11 bricks high. We’ll come back to those walls and the height later on.
The top itself has seen some changes from those shown during the crowdfunding campaigns. First of all, the camera has been changed from a 5MP autofocus to a 3MP fixed focus (although their website still incorrectly claims it is 5MP).
Looking online, I notice there had been some complaints about this, but it seems reasonable to me. Anyone who has sat in a Zoom meeting while someone’s autofocus webcam hunts continually without finding focus will appreciate that a fixed focus lens is a good idea, especially as the distance between the camera and the objects in the box is fixed.
The 3MP camera also seems to have good enough resolution to work fine for this application. Another obvious change is that the LED lights are now in a circle and pointing downwards rather than on the edges and pointing across. They also shine through a translucent plastic diffuser, presumably to spread the light more evenly across the box.
For my first box, I went with what looks like an albino hedgehog, constructed using mainly very cheap white 1×2 bricks with a pin on the side. Once constructed, you put the top on your box, plug in the USB-C cable, connect it to your PC/Mac and you are ready to go. You need to register an account on the website, place the card with the QR code in the box, and scan it to complete the registration process.
You then replace the QR card with the off-white greyish card that provides the correct background for the images. It does not work properly if you do not do this. You then put a part into the box, hit scan and it does its magic. Once the box is built, it is up and running within a few minutes. Probably 95% of the total set-up time is spent building the box for it.
Another change from the crowdfunding stage is that it only works with a PC/Mac and does not work with a smartphone or tablet as originally claimed. Luckily, I have a PC not too far from my build / play area and another near my BrickLink storage, although I always use a tablet when it comes to picking orders. This does seem to be a bit of a downside compared to the originally advertised spec but not too bad to overcome.
On to the tests. Each of my tests will have a number of images associated with it. These are snapshots of the browser window and show what the user sees after scanning the part. Let’s start with some torsos. Given the Italian origin of INSTABRICK, the first one to try is an obvious choice. Here we have the Vitruvian Man torso that was available in the Build-a-Minifigure stations in LEGO stores.
When aligned inside the device, with the arms neatly by the sides, it identifies it with a perfect match. This is a close-up of what is contained in the information box:
Let’s make it a bit harder on the next few tries by not being so careful …
Clearly, at least in this case, having the torso less well aligned inside the box, raising one or both arms, pulling one arm off, or even flipping it over all still lead to positive results. There are occasionally other parts that it thinks might match, but it gets the correct one each time. Excellent! The database has enough images to work out what we have no matter how it is put into the device.
Let’s try another one, this time J.B. Watt’s torso from Hidden Side, but we’ll also include some other bits this time to see if it gets confused.
In each case, it recognises the torso and also shows a possible match to the figure (although whoever entered the name was a bit lazy), even when a completely different head is on the torso. However, it does not recognise the head when imaged by itself.
Let’s try to confuse it further. The next torso should have dark bluish grey arms; let’s replace them with light bluish grey instead. P.S. I am a BrickLink user, so I use their naming conventions.
Again, the results are very good. It found the correct torso (without arms) as the top ranked match. The torso assembly (with DBG arms) was found at position 6 on the list so there was clearly a lowering of the match due to the wrong colour arms. The parts/figures in positions 2-5 were wrong, but at least the torso in the first place is a good start to identifying what we have. Another success.
Let’s give it some more torsos from across a range of themes…
The torsos of Samwise Gamgee (even with incorrect white hands!), Scuba Robin and Obi Wan were correctly identified. The surgeon’s torso was not identified so is presumably missing in the database, but the full figure was identified, so at least that is a partial success and gives a good indication of what we have. I found this was often the case for CMF parts.
However, the torsos of Luke, Leia, Cinderella and Elrond were not identified. Although I have shown a number of successes so far, failures are more common. I tried out over 100 different torsos and about 25% were correctly identified and for another 10% or so the character they came from was identified even if the torso was not. The hit rate for Collectable Minifigures was particularly high.
It does quite well on torsos when they are in the database, but torso assemblies are quick and easy to search for manually at BrickLink (as long as they have not been modified by switching parts), just by using colours and maybe a couple of descriptors. For example, the last one above (Elrond’s) is pearl gold, has pearl gold arms and light nougat hands, so a BrickLink search for pearl gold torso "pearl gold arms" "light nougat hands" (note the careful use of quotes) cuts the number of torsos to search through down to just 7, as shown below! It is easy to do such a search in under 30 seconds on BrickLink.
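Incidentally, the quoted-phrase trick lends itself to a small script if you find yourself building such searches often. Here is a minimal sketch in Python; note that the `bricklink_query` helper and the search-page URL are my own assumptions based on BrickLink’s public search form, not a documented API:

```python
from urllib.parse import urlencode

def bricklink_query(loose_terms, exact_phrases=()):
    """Combine loose terms with double-quoted exact phrases,
    mirroring the manual quoted search described above."""
    q = " ".join([loose_terms] + [f'"{p}"' for p in exact_phrases])
    # The search-page URL below is an assumption based on BrickLink's
    # public search form, not a documented API endpoint.
    return "https://www.bricklink.com/v2/search.page?" + urlencode({"q": q})

url = bricklink_query("pearl gold torso",
                      exact_phrases=["pearl gold arms", "light nougat hands"])
print(url)
```

The double quotes around each phrase are what force BrickLink to match the arm and hand colours as complete phrases rather than as scattered words.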
However, heads are much harder to search for at BrickLink, as they typically have only a single base colour, the colours of the print are often not easy to identify, and coming up with useful search terms is more difficult. So this is where the INSTABRICK might come into its own, by massively speeding up searches for random heads.
Unfortunately, this wasn’t the case. I tried 40 different heads and only got 5 positive matches. Few of them seem to be in the database.
Now for some older used 1980s figures, the sort of thing that frequently comes from bulk used lots and is often missing pieces, has had parts swapped out, has been drawn on, or has had prints rubbed off.
The red Forestman was found but the two blue ones were not. The peasant failed although similar prints were found, even though in very different colours, which might help track down what we have here. Note also one of the problems with BrickLink searches in the part names – inconsistent use of terms. The minifigure has a “pouch” whereas the torso has a “purse” even though it is the same design.
The Crusader was obviously rather unsuccessful, showing minidolls that do not look anything like it in either shape or colour. It frequently makes very bad suggestions like this which can be quite frustrating or funny depending on your mood. The threshold for showing possible matches seems to be way too low.
What about the old ghost shroud?
This highlights something else that happens quite frequently. It did not identify the shroud when aligned, or when rotated, but when moved very slightly (between the second and third images) it suggested some incorrect matches, although it did find the newer shroud this time. This indicates that when there is no match, the suggestions tend to be somewhat random. Why suggest the shroud for image 3 and not for 1 or 2? It seems inconsistent.
Let’s go for possibly the most beautifully detailed minifigure ever produced: Theoden, from The Lord of the Rings. And what happens if he loses parts, as might happen in a played collection?
We get positive matches for the full figure, missing the helmet and/or armour, and get a match for the torso if the legs are missing. It failed when flipped over and also the head alone did not produce a match. It seems that if we have most parts of a specific figure, it produces good results if the figure is in the database with a decent number of images to work from, and we are sensible enough to put it the right way up. Again, excellent results.
Let’s now go for some custom figures to see what happens as they will certainly not be in the database. Sticking with The Lord of the Rings, here is my custom Eowyn.
It correctly matched both the torso to the Leia figure and the head. A very good result!
What about a more extreme custom, this time Gandalf. Note the hair and beard were originally genuine LEGO parts but have been cut and combined together into a single part.
This time no match at all, but the torso is recognised correctly when the head and the custom hair/beard combined part is removed.
But wait a minute, what else do we see there? It is a blue Forestman. He is in the database after all, but my two blue Forestmen were not correctly identified earlier! That is disappointing.
Here is a similar thing happening again, with Green Lantern’s head:
What does it think it is? Theoden’s head, even though it didn’t identify Theoden’s head earlier on. Again, disappointing.
One thing that I often read during the crowdfunding stage was that it would tell the difference between old and new greys, between the different browns and so on. Let’s test that out, starting with an old light grey 1×2 brick.
Well, that is not a good start. Let’s change position and orientation.
Here are 10 attempts at finding it (I actually tried 20 times), moving the brick around. Note that it is actually in the database, as it appears in the 10th image. It appeared once in 20 tries, and when it did, it appeared after the coral one, suggesting coral is a better match. This is probably an indication that even though the part has been added to the database, there are not enough images of it. Notice also some of the suggestions: these are frequently useless when there is no good match, yet no confidence score is shown for the matches.
Let’s go for an old light grey arch instead.
The results are just as bad. The first time it thought it was a 1950s black car, the second time a white one! It did get the shape right the second time, but look at the colour: it thought it was light bluish grey and not old light grey, so a fail on colour recognition. Maybe old light grey is not in the database and this was the best match in that orientation (though it was not found at all on the first try).
It also failed to identify a number of other parts, both old and modern, in similar colours. Even if the part was correct, the colour is often wrong. The greys and browns and to some extent blues and greens produced particularly bad results.
Let’s give it another try with an old light grey panel / wall piece.
A black bear, a white bear, Mickey Mouse or a minifigure cape! This sort of result is not uncommon.
What about some more common parts?
It is meant to be able to tell plates from bricks by the length of the shadows, but it could not identify a 2×3 tan plate. It failed on a black 2×4 brick, returning a reddish brown one. It failed on a reddish brown 1×4 log brick, returning the right brick but in dark brown in one case or printed bricks (including a 1×3) on another try. It also failed on many other common parts. Maybe people don’t need to search for such common bricks, but the database seems to be severely lacking here, especially if the intent is to identify any brick.
We’ll end the tests on a fan favourite, Nick Bluetooth. He doesn’t fit in the box, but his head does.
Results: a forklift or a DUPLO teapot!
When it works, it works very well. A major problem though is that it doesn’t work very often. We’ll come back to this later on.
Less than optimal conditions
During the crowdfunding campaigns, a number of people asked why this was not done as a phone app, and I recall that the reply was that their research did not give good results for two reasons: the distance of the parts from the camera needs to be consistent to get the scale right, and the lighting needs to be consistent. So why not test this out? To test the distance, I built the three-walled box as instructed but varied the height away from 11 bricks. Here are the results, starting at 8 bricks high through to 14 bricks high. Obviously, the part appears smaller as the height of the box increases.
The system got the right identification for heights between 9 and 13, failing for 8 and 14. There is a reasonably large size difference for the 1×8 tile image at 9 high and 13 high. I got similar results for other parts that are known in the database, tending to fail if too close or too far, but with a reasonable tolerance of about +/- 20%, suggesting that as long as you know what scale to aim for, distance should not be a problem for identification purposes.
What about consistency of light? The first thing I did was remove the walls of the three-sided box, which presumably are meant to stop outside light straying in. Instead, I built four 1×1 pillars, 11 bricks high. This was less secure, and the top fell off if one of the pillars was knocked, so I wouldn’t recommend it for stability reasons.
However, for known parts, I got exactly the same results as when there were three walls so any stray lighting was having a minimal effect. This might be because I was doing this in a fairly dim room away from any strong lights.
So what about more extreme lighting conditions? I got my very bright bicycle front light and placed it just out of shot to cause quite extreme shadows and very poor, inconsistent lighting and again tested parts that I know are in the database.
Perfect results again, despite a very bright light source causing extreme shadows especially when the arms are raised.
Of course, images taken for inclusion in the database should be based on both optimal distance and light conditions so they come from a consistent standard. However, for identification purposes, rather extreme conditions still produce matches if the parts are known. And if they are not known, it rarely produces a sensible suggestion anyway under optimal conditions.
No doubt there are other issues with having a phone app instead, such as compatibility on various operating systems, through to pricing/charging for the app. I imagine it would be quite expensive given that when you buy the package it is not just the camera and light top, but access to the software and database, and it is the latter that really makes the system usable.
Does it work?
The all-important question is: does the system work? I think that has to be answered in three stages, as there are essentially three components working together: the top, the technology/software and the database.
The INSTABRICK top is well built: it feels sturdy enough to be bashed about a bit, appears to contain quality components that are all more than adequate for the job, and even feels quite tactile with its nice smooth rounded corners. It is perfectly sized to fit a 16×16 base and doesn’t move or wobble when put on top of the walls as instructed.
If you go with 1×1 pillars in each corner instead of walls, it can be a bit wobbly but of course that is not recommended. The lighting and distance it provides is of course optimal when built according to the instructions. It is possible that the camera is very slightly off-centre – you might just be able to see the base of my wall on the right-hand side of all my images – but this does not affect the performance. The top is a definite positive and a quality bit of kit.
The technology behind the recognition is clearly working, provided the parts have enough photos in the database. However, there are some issues with the software. The first is that although there are help pages, they are not necessarily that helpful. For example, I could not find anything detailing the difference between a quick scan and a deep scan; in fact, the help just refers to "a scan".
There is no indication of when to use one or the other, or why it doesn’t automatically perform a deep scan if the confidence after a quick scan is low. Another issue is the number of matches shown and the information displayed for them. There is no indication of the score of a match. Earlier descriptions indicated that some sort of score would be shown, but this is not present. Especially given some of the very random matches it finds, it would be nice to know what the scores are.
The other issue here is unnecessary data being shown. When I want to find matches, I want to see the parts it thinks are matches so I can quickly scan them by eye. However, these are shown as quite small images, and only two fit on the screen at once. Larger images would be much more useful, especially if you are trying to tell apart minifigures with very slight differences, for example.
A lot of the important information such as the DesignID / BrickLink part number could go to the side. There is a lot of wasted space on the screen and a significant amount of this is down to showing the details and a photo or logo of the person that originally submitted that part to the database. While I don’t mind that information being recorded somewhere, there is no need to keep seeing this when I am searching for matches. It is totally unnecessary, especially if it means I can see fewer matches on the page due to so much wasted space.
The final issue is the time taken. The tagline is to identify any brick in a blink of an eye. We’ll leave the "any brick" part until the next paragraph and concentrate on the "blink of an eye". I found that when I got a match, it typically took 9-14 seconds. Where there was no match, it could take as long as 25-30 seconds on a quick scan, and even longer on a deep scan. While 9-14 seconds is fast, it is not the blink of an eye the tagline suggests, and not all that much faster than someone who knows how to search at BrickLink. Clearly, though, if you have hundreds of parts to identify this would represent a good overall speed-up, especially for anyone with little to no experience of identifying parts. The technology is a positive; the software is acceptable but could be better.
Now the third component and probably the most important given the other two work well – the database. This is really what makes the device survive or fail, and unfortunately it is (currently) a fail. Looking back at past claims, it was mentioned that the creators would have 90% of parts and minifigures in the database by the time of launch (originally Feb 2020, pushed back to December 2020). Well, it is now the start of April 2021, about 3-4 months after the delayed launch and these are the database statistics (note this data is only available after buying and registering):
The green bar (and smiley face) represents the percentage of parts with enough photos for the software to identify matches reliably; the orange bar (neutral face) represents parts that have been added and occasionally match but require more photos for reliable results; and the grey bar represents parts that have not been added at all.
This falls a long way short of the 90% claimed for the time of release. It is fairly obvious why so many parts do not get matches: either they are not in the database, or, even when they are (remember it did not identify Theoden’s head, the blue Forestman or the light grey 1×2 brick, even though they are in the database), more likely than not there are not enough images for the AI to work, and multiple attempts are necessary, moving the part around in the hope of finding a match, unless you give up. I don’t think the product is market ready: to work, it requires all three components – the physical product, the software and the database – to be ready, and one of them is not.
It seems that rather than getting the database 90% ready for the release date as claimed during the crowdfunding period, they are now relying on crowdsourcing the necessary data from paid-up users after release instead. There is a monthly competition, and whoever enters the most data wins a LEGO set (42107 Ducati Panigale V4 R for March). To me, there is little incentive to work on supplying data under that model, as it is all or nothing each month (although they do say there will be runners-up prizes for April), especially if you are up against someone with lots of time to spare.
If, instead, you could continually add data at your own pace and cash in points for different rewards whenever you felt like it, there might be more incentive for more people to help populate the database. This is not like BrickLink, where there is no financial cost to enter parts into the database: here you must have already bought into the system to be able to supply photos, so the crowd is very small.
Another important issue that has apparently skewed the database is the "ownership" of parts. You "own" a part if you are the person who supplies its first picture to the database. If you "own" the part, then your name and photo or logo are shown on the search screen whenever that part is a suggested match.
You also get substantially more prize points for the first photo and linking it to the part than for adding any subsequent photos that are necessary for the matching algorithms to work efficiently. This seems to have led to a race to “own” the parts in the database without submitting enough photos for the algorithms to work properly. This can be seen by the size of those green and orange bars.
For every three parts added, roughly only one has enough photos. There is even a prominent league table when you log on for who “owns” the most parts. Maybe they need to change the process so that you only “own” the part once there is enough data to create reliable matches, rather than just adding it to the database without sufficient data. Otherwise, it seems most parts will be added with insufficient data when relying on crowdsourcing.
There is another issue of sloppiness and inconsistency in the part names. Existing parts seem to have been taken from an old version of BrickLink’s database but have lost their capitalization (for example, king theoden rather than King Theoden) and in some cases appear with outdated colour names (for example, one of the bears has medium flesh in the name rather than medium nougat, a change made at BrickLink back in February 2020). Newer (and presumably future) parts appear to have no link at all; for example, the Hidden Side partial figure I scanned is called "J.B. Watt (Large smile / annoyed)" on BrickLink but is just "j.b." here. Inconsistencies in naming will make comparisons difficult, especially when there are multiple variations of characters.
Is it worth it?
The final question is, of course, is it worth the €149 price? In its current state, the answer is almost certainly not, as the database is too poorly populated to be of any use: fewer than 15% of parts/minifigures are reliably identified when scanned, rising to about 35% that are known and so might be matched if you are lucky or don’t mind repeatedly scanning, moving the part slightly each time.
Let’s assume that the database does get close to fully populated. Then who would use it? It is fun to use at least for a few hours, as the software magically identifies on screen what you put into the box (at least when it works). Therefore, people that like playing with this sort of technology would probably get both use and some fun out of it, even if they don’t really need it. I cannot really see it being a valuable tool for collectors/builders who tend to know what parts they have and use frequently.
Similarly, it is probably not much use for BrickLink sellers that sell new parts as they have a list of the parts when parting out new sets and so identification is not necessary. However, I can imagine it would speed up identification of parts for BrickLink sellers that sell mainly used parts and buy mixed up collections.
Whether the cost is justifiable is another matter. I would expect that most of the larger, used part BrickLink sellers with experience should be able to identify at least 90% of a typical mixed up box of LEGO parts within a few seconds per part or, for the majority of the remainder, to be able to find them quickly using a search at BrickLink.
There are often unknown printed parts, or minifigures where parts have been exchanged, that are difficult to identify, and this device may help speed up the identification and processing of such parts. However, a seller would need to be turning over a very large number of parts to justify the €149 cost of the device for identifying maybe just a few percent of the parts they handle.
I imagine it would be useful for someone who has a lot of printed parts and is not very experienced at searching BrickLink, or who uses inexperienced staff to identify parts, especially where there are many similar designs on a single-colour part, such as minifigure heads. Similarly, it may well be useful for someone who buys a lot of mixed-up minifigures and needs to sort them out; given the relatively high value of minifigures, the cost of the INSTABRICK could be worth the investment there.
That said, uploading a photo of unknown parts to the BrickLink forum or the Brickset forum will get them identified, typically within an hour or so and for free.
Right now, though, I would not recommend buying it. It remains to be seen how quickly the database will be populated and the system becomes more reliable.
Thanks to Instabrick for supplying the unit for test. All opinions expressed are my own.