Sometimes I write the answer meaning/pronunciation and get a green “success”, but also a pop-up saying I had a typo in the meaning (hello there, “here/there”) or that I used a non-standard pronunciation. But my brain, seeing the green “success”, instantly makes me hit enter. In some of these cases I want to review the item’s meaning/definition, but I’m too lazy to search for the item… or I think I’ll do it later and forget…
Could you maybe keep the last item on the next screen, but off to the side? Either showing the meaning/pronunciation on hover, or just as a link to the database that opens the definition in a new tab?
I know this sounds like a lazy person’s problem, but when you have a big review pile it can be tedious to open a new tab with HanziHero and then search for the definition. And I do believe you also fit our stereotype of lazy developers. (I’ve been so lazy that I’ve spent hours automating tedious stuff, like most of you probably.)
I’ve also previously pressed "ctrl + ", which sent my current review window to the dashboard - disrupting open/half-done reviews - which is no fun…
Thanks for the feedback! We actually had one other user submit a similar request. We should have a feature out in the next month or so that lets you easily go back to the previous item in a review, for this specific scenario of clicking “next” too soon. I’ve also hit this problem, and am excited for us to ship this improvement.
I’m often in this situation where I see a character, e.g. 相, in a review and do a double take later when I see a similar one like 租. I wish I could just scroll back to see the review history. Minimally contrastive pairs like that really help me nail down the details.
Yes, some way of comparing very similar characters would be nice. Not during review, as it might “help you” if a similar character comes up later. But there are characters like 巳 and 巴 that I mix up, and it would be helpful if, when you open one of their info pages, you could also see the other similar ones and what distinguishes them.
Yeah, this is something we sort of have implemented, but have incomplete data for. Right now it requires someone (i.e., me) to go through and manually input all visual similarities. Meaning similarities we already handle via basic overlap of any of the meanings between characters/components. One thing I considered for character similarity was to mark all characters that differ by only one component as “similar”, but I think that would produce some false positives; a sketch of that heuristic is below.
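To make that concrete, here is a minimal sketch of the one-component-differ heuristic. The decomposition table and names are illustrative, not our actual code or data:

```python
# Minimal sketch of the "differ by one component" heuristic.
# DECOMPOSITION is a hypothetical table mapping each character to its
# component set; real data would cover the full character inventory.
DECOMPOSITION = {
    "清": {"氵", "青"},
    "晴": {"日", "青"},
    "情": {"忄", "青"},
    "湖": {"氵", "古", "月"},
}

def one_component_apart(a: str, b: str) -> bool:
    """True when both characters have the same number of components
    and exactly one component on each side differs."""
    ca, cb = DECOMPOSITION[a], DECOMPOSITION[b]
    return len(ca) == len(cb) and len(ca - cb) == 1 and len(cb - ca) == 1

def similar_candidates(target: str) -> list[str]:
    return [c for c in DECOMPOSITION
            if c != target and one_component_apart(target, c)]

print(similar_candidates("清"))  # ['晴', '情'], plausible lookalikes
```

The false positives come from the fact that sharing all but one component doesn’t guarantee two characters actually look alike on screen.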
Of course, it does seem like something some sort of ML model could help us with, but I couldn’t find any prebuilt one.
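Even without a trained model, there is a naive baseline: render each character to a small bitmap and score pixel overlap. This is a crude sketch rather than an ML model, and it assumes Pillow is installed and that a CJK font exists at the placeholder path:

```python
# Naive visual-similarity baseline: render glyphs and compare pixels.
# FONT_PATH is a placeholder; point it at any CJK font available locally.
from PIL import Image, ImageDraw, ImageFont

FONT_PATH = "NotoSansSC-Regular.otf"  # hypothetical path
SIZE = 64

def render(char: str) -> Image.Image:
    img = Image.new("L", (SIZE, SIZE), color=255)  # white grayscale canvas
    font = ImageFont.truetype(FONT_PATH, SIZE)
    ImageDraw.Draw(img).text((0, 0), char, fill=0, font=font)
    return img

def pixel_similarity(a: str, b: str) -> float:
    """Fraction of pixels whose ink/no-ink state matches between two glyphs."""
    pa, pb = render(a).getdata(), render(b).getdata()
    matches = sum((x < 128) == (y < 128) for x, y in zip(pa, pb))
    return matches / (SIZE * SIZE)

print(pixel_similarity("巳", "巴"))  # lookalike pair from this thread scores high
print(pixel_similarity("巳", "湖"))  # dissimilar pair scores lower
```

Something like this would over-reward characters with similar ink density, so a real solution would need to be smarter, which is where a model could come in.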
Lastly, I believe WaniKani has similar data sourced from an open dataset of visually similar kanji, but I could not find a comparable open dataset for hanzi. Now that I think about it, I could probably adapt that dataset to cover a good subset of our characters; a rough sketch of that adaptation is below.
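Since many characters share Unicode codepoints across Japanese and Chinese, the adaptation could be as simple as keeping only the pairs where both characters exist in our character set. A rough sketch, with a hypothetical dataset shape:

```python
# Sketch of reusing a kanji visual-similarity dataset for hanzi.
# KANJI_SIMILAR uses a hypothetical format: character -> similar characters.
# HANZI_SET stands in for the set of characters taught in the app.
KANJI_SIMILAR = {
    "土": ["士", "王"],
    "末": ["未", "本"],
}

HANZI_SET = {"土", "士", "王", "未", "末"}

def adapted_similarities(kanji_similar: dict, hanzi_set: set) -> dict:
    """Keep only similarity pairs where both characters are in our set."""
    adapted = {}
    for char, neighbors in kanji_similar.items():
        if char not in hanzi_set:
            continue
        kept = [n for n in neighbors if n in hanzi_set]
        if kept:
            adapted[char] = kept
    return adapted

print(adapted_similarities(KANJI_SIMILAR, HANZI_SET))
# {'土': ['士', '王'], '末': ['未']}
```

Characters whose simplified forms diverge from the kanji would fall through the filter, which is why this would only cover a subset.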
In short, this is something we will get around to improving. If anyone finds any data/algorithms that could help with the process, let us know.