* UI/UX: "Tool->GEGL Operation..." is too much friction for such a common operation- just pop it up when you click on the "FX" button in the layers window.
* UI/UX: Naming. Drop shadows and glow are currently not discoverable (they're squirreled away in the generic "GEGL Styles").
* UI/UX: "Move Tool" should act like a common entry point to other tools if you're not dragging. Switch to "Transform Tool" if I single click an image layer. Switch to the "Text Tool" if I single click a Text layer! Please!
* UI/UX: Copying/pasting layer styles does not work. Users can overlook many issues if they can duplicate/destroy layer styles easily. The preset system is cumbersome. Idea: presets usable from the Layers window directly (could be just add/apply presets) would help a lot, but plain copy/paste would probably be better.
* BUG: Layers often clip GEGL Glow. Again, this could be worked around by easy copy/paste of layer styles. See the clipping on the "GIMP Halloween Party" text in my image.
UX was a field in the 1990s, when it was at its height. We still have designers, but most software houses closed their academic UI/UX research groups and just hire people who make things that look attractive.
If you've recently tried to teach a computer-illiterate person to do something, you'll know what I mean. No consistency, no internal logic or rules, just pretty stuff that you simply have to know.
Windows 95, of all things, was actually a good example of a company doing 'proper' research driving UI work.
Btw, I loathe the term UX, because 'interface' (in UI) should already be a big enough term to not just mean pretty graphics, but the whole _interface_ between user and program. But such is the euphemism treadmill.
I remember studying this, and the difference was not only about interaction but about the general impact on the user.
I always found MacOS Finder's spatial file placement a good example (Non-MacOS users - Finder has this thing where it remembers window positions and file positions within a window, so one can arrange files as they please and they stick). If that feature were removed, the UI would stay the same (there are file icons, some windows, the same layout), but the feature does remove some of the cognitive load.
UX is impacted by many non-UI things: load times, responsiveness to input, reliability (hello, dreaded printer dialogs that promise to print but never will).
An anti-pattern I hate with a passion is the MacOS update bar. I want to do some work in the morning, I open my computer and it's friggin' updating. This sucks, but it happens; we got forced into this. And then there's this progress bar that jumps: 20%, 80%, 50%, 30%, 90%. A colleague asks when I'm going to be online - "oh, 10% left, probably soon" - ding - the progress bar is back to 30%.
The UI is the same from the observer's point of view (it shows the progress, which I suppose is correct and takes multiple update phases into consideration), but the UX is dropping the ball here.
OSX has had the striped progress bar for hard-to-estimate processes for as long as I can remember. Did they do away with it?
There are situations where I don't exactly care how far something has progressed, but I want to see that it at least has not hung. Fedora's dnf doing SELinux autorelabeling for two hours without any indication of progress is one of those things I hate with a passion.
There's still a progress bar during updates, and there's a timer (but not always).
The timer also jumps. Once I had a ~40-minute update that kept feeding me hope with "2 minutes left" for most of that time.
My guess is that it's not worth optimizing, but nowadays I shy away from updates if I don't have a 2h time buffer (not because I am afraid something will break, but because I know I'll be locked out).
Their update timer always starts from 29 minutes remaining and goes from there, IIRC, and I find that it's way more accurate than Windows' 99% of the time.
Funnily enough, Linux (KDE) has gotten very good at its estimates for some time now. Better-behaved storage also plays a role, I presume.
The only real indication of progress being made is a log output of steps completed. All a spinner or similar indicator tells you is that the UI thread is not hung but that isn't really useful information.
In the 90's I had this idea that in the future steps completed would be confirmed to the server so that the progress can be calculated for other users. Like, on system A downloading step 1 takes 1 minute and step 2 takes 3 minutes. If on system B step 1 takes 1.5 minutes, step 2 should take 4.5. Do the same for stuff that requires processing.
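A minimal sketch of that idea, with made-up step names and timings (the reference numbers would be the ones reported by earlier runs on other machines; everything here is hypothetical):

    # Sketch: scale the remaining steps by how much slower this machine was on
    # the steps it has already finished. Step names and timings are made up.
    reference = {"download": 60.0, "unpack": 180.0, "configure": 120.0}  # seconds on system A

    def estimate_remaining(completed, reference):
        """completed maps step name -> seconds it actually took on this machine."""
        ratio = sum(completed.values()) / sum(reference[s] for s in completed)
        return sum(t * ratio for s, t in reference.items() if s not in completed)

    # Step 1 took 90s here instead of 60s, so expect the rest to take 1.5x as long too.
    print(estimate_remaining({"download": 90.0}, reference))  # -> 450.0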
But we apparently chose to make things complicated in other ways.
Obviously in the ideal case indicator animation would be tied to something else, like background process sending output, and there would be a textual description next to it.
I think on earlier Windows (95 maybe?) opening a folder would also always open in a new Explorer window, so you had the impression that the window is actually the folder you are opening. Whereas today we're more used to the browsing metaphor, where the window "chrome" is separate from the content.
I also don't think today it's useful to have the spatial metaphor, but it probably made more sense back then.
A window is a program window, not an actual window. A folder is not the same as a folder in a filing cabinet, and a save icon is a save icon, not a floppy disk. They don't have to stand for or emulate physical things.
Historically, it's both, which is how we got here.
The Xerox demo was definitely trying to make near-as-possible 1-to-1 correspondences because their entire approach was "discovery is easier if the UI abstractions are couched in known physical abstractions." UIs that hewed very closely to the original Xerox demo did things like pop open one window per folder (because when you open a folder and there's a folder inside, you still have the original folder).
As time went on and users became more comfortable with computerized abstractions in general, much of that fluff fell by the wayside. Classic Mac OS (System 7), for instance, would open one window per double-click by default; modern desktop MacOS opens a folder into the current window, with the option to command-double-click it to open it into its own... tab? (Thanks, browsers; you made users comfortable enough with tabbed windows that this is a metaphor the file system browser can adopt!)
I had my folders themed on win 95. It is kinda hard to explain but the color schemes and images trigger a lot of mental background processes related to the stuff in the folder. Just seeing a green grid on a black background would load a cached version of the folder in my head and alt-tab into a linked mental process that would continue where I left it.
I think we need more visual cues for common operations to give more assurance and reinforce the action. For example, recently I was trying to back up some photos from an android phone by plugging it into a windows machine and copying files over. I already had an older version copied from before, and I was surprised that the copy action resulted in the same number of files after I selected "skip" in the dialogue. What happened was that I probably tried to copy from windows to android by mistake. With everything looking the same it's easy to miss things and have the wrong mental model of what is about to happen. It would be great to have more feedback for actions like this, maybe show the full paths, show the disk/device icons with a big fat arrow for the copying direction or something. Basically the copy/move dialog is the same for 10 files and 10,000 files, same for copying between devices and within the folder... and it will happily overwrite any files if you click the wrong option by mistake. And unlike trashing files I am not sure it's possible to undo the action.
"Experience" is more than just "interface". E.g. which actions are lightning-fast, and which are painfully slow is an important part of user experience, even if the UI is exactly the same. Performance and space limitations, things like limited / unlimited undo, interoperability with other software / supported data formats, etc are all important parts of UX that are not UI.
UI, where the I stands for "interface" just like in HCI, used to mean all those things.
But in the industry the focus turned to aesthetics, so a new term was invented to differentiate between focusing on the entire interface ("experience") vs just the look.
Just like "design" encompasses all of it, but we add qualifiers to ensure it's not misunderstood for "pretty".
Thing is: changing the colours _could_ improve the UX.
E.g. I'm colourblind, and a careful revision of a colour scheme can make my life easier. (Though I would suggest also using other attributes to differentiate, like size, placement, texture, saturation, brightness, etc.)
That’s a good example for showing how “UI” and “UX” are essentially the same thing. At least in a practical context.
We can call an excellent story teller a “writer”. A good story can be described as “good writing”. A great story, let’s say a film being adapted as a book, can become a terrible book if it is “let down by the writing”.
In the context of books and storytelling, “writing” is the all-encompassing word that experts use to describe the whole thing. Just like “UI” used to mean the whole thing.
The thing with not well-defined names is that they're open to interpretation. To me, the difference between UX and UI is on a completely different axis.
When I was at university, I attended a UI class which - although in the CS department - was taught by a senior psychologist. Here, the premise was very much on how to design interfaces in such a way that the user can intuitively operate a system with minimal error. That is, the design should enable the user to work with the system optimally.
I only heard the term UX much later, and when I first became aware of it, it seemed to be much less about designing for use and more about designing for feel. That is, the user should walk away from a system saying "that was quite enjoyable".
And these two concepts are, of course, not entirely orthogonal. For instance, you can hardly enjoy using a system when you just don't seem to get the damn thing to do what you want. But they still have different focuses.
If I had to put in a nutshell how I conceptualize the two disciplines, it would be "UI: psychology; UX: graphic design".
And of course such a simplification will create an outcry if your conceptualization is completely different. But that just takes us back to my very first sentence: not well-defined names are open to interpretation.
> Here, the premise was very much on how to design interfaces in such a way that the user can intuitively operate a system with minimal error.
Yes, that's a good default goal for most software, but not always appropriate.
E.g. for safety-critical equipment to be used only by trained professionals (think airplane controls or nuclear power plant controls) you'd put a lot more emphasis on 'minimal error' than on 'intuitive'.
We can also learn a lot from how games interact with their users. Most games want their interface to be a joy to use and easy to learn. So they are a good example of what you normally want to do!
But for some select few, having a clunky interface is part of the point. 'Her Story' might be an interesting example of that: the game has you searching through a video database, and it's only a game because that search feature is horribly broken.
UX is just a weaselly sales term, "Our product is not some mere (sneers) interface, no, over here it is a whole experience, you want an experience don't you?"
It's just the euphemism treadmill. Just like people perennially come up with new technical terms for the not-so-smart that are meant to be purely technical and inoffensive, and over time they always become offensive, so someone has to come up with new technical terms.
> 'Idiot' was formerly a technical term in legal and psychiatric contexts for some kinds of profound intellectual disability where the mental age is two years or less, and the person cannot guard themself against common physical dangers. The term was gradually replaced by 'profound mental retardation', which has since been replaced by other terms.[1] Along with terms like moron, imbecile, retard and cretin, its use to describe people with mental disabilities is considered archaic and offensive.[2]
I once upon a time coined the term "scientific physics". UX is not progress; it is the astrology of UI design. The UI exists between the silicon and the wetware computer as a means to interface the two. UX aims to modify the human and invade their state of mind. Doom scrolling is an example of great UX. Interact vs subdue. I want to experience the meaning of the email, not the email application.
I don't think it's weaselly: it's not the first term that has lost its original meaning (like "hacker" or, ahem, "cloud") and required introducing specifiers to go back to the original meaning.
For fun, I did a search for "user interface" before:1996-06-01 .
I found a paper that was definitely taking the perspective that the "user interface" encompasses all the ways in which the user can accomplish something via the software. It rated the effectiveness of a user interface in terms of the time taken to complete various specific tasks. (While remarking that other metrics matter to the concept too, and also measuring user satisfaction and error rates.)
But that paper also suggested how the term might have specialized - four pieces of software were studied, and they are presented in a table that gives their "interface technology", in two cases a "character-based interface" and in the other two a "graphical user interface".
Enough usage like that and you can see how "interface" might come to mean "what the user interacts with" as opposed to "how tasks are performed".
( https://www.nngroup.com/articles/iterative-design/ . It really is dated 1993, which I made a point of checking because Google assigns the "date" of a search result based on textual analysis, and it is frequently very badly wrong. I can't really slam the approach, which I assume was necessary to get the right answer here, but the implementation isn't working.)
UX includes the possibility that the software will be actively influencing the user, rather than merely acting as a tool to be used. (websites selling you stuff versus a utilitarian desktop app).
That's consistent with your timeline of the decline of UI/UX though. My sense is that the birth of the term UX marked the beginning of the decline because it meant redefining the term UI as being purely about aesthetics, implying that no one was paying attention to all of the non-aesthetic work that had previously been done in the field.
The term didn't really exist, but user experience was a thing. I took a human-computer interface class in college about designing good UI. At my first job out of college in 1996, I got permission from my boss and the boss of the corporate trust folks to go sit with a few of my users for 1/2 a day and see how they used the software I was going to fix bugs in and add features to. Apparently, no one had done that before. The users were so happy when I suggested and implemented a few things that would shave 20 minutes of busy work off their day - things that weren't on their request list because they hadn't thought it was something that could be done.
I remember it as "human-machine interaction" and "HMI design" or "interaction design". It was mostly about positioning interface elements, clear iconography, and workflows with as few surprises and opportunities for errors as possible. In industrial design, esp. for SCADA, it is often still called HMI.
Yeah, if you wanted to study usability (or what we call UX today), you'd take the ergonomics course, and there'd be usability classes. So you'd learn about how to sit at a desk, how to design a remote control, and where to put the buttons in an application.
It does seem a bit weird, but I feel like this bigger picture is what a lot of today's design lacks.
I have a guy at work who does most of our UI/UX design, and recently one of the screens we needed to implement involved a list where the user needs to select one option then click "Save". He designed it with checkboxes... some people just have no idea that UX conventions exist.
The fundamental problem with UI/UX is that it’s so heavily dependent on your audience, and most software caters too disproportionately to one audience.
New users want pop ups, pretty colors, lots of white space, and stuff hidden. Experienced users want to throw the computer through a window when their tab is eaten because of a “did you know?” popup.
Enterprise, professional software is used a lot. Sometimes decades. You need dense UI with a UX that’s almost comically long-lived. Experienced users don’t want to figure out where a new button is, they’ve already optimized their workflow to the max.
My impression was that at some point, they went too far with the scientific approach. As in: round up the last people who have never touched a computer, put them in an experiment, and make their success rate the only metric that counts. Established conventions? "Science says they don't work".
This attack on convention then paved the way for the "just make it pretty" we see today.
Exactly this thread. I use Adobe suite of software, and I really don't even know what GEGL means. Nor should I need to know. Organize filters by function. Blur->radial/Gaussian,linear,etc. Noise->add/remove/etc.
Designing the UI based on how the code for a filter operates is cool for deciding where the .cpp files live, but it is not how the users think. Then again, a user of GIMP over other apps probably does filter that user into a more techy side of user than artistic side, so I'm probably eating a bowl of crow soon.
Seems like maybe time for FOSS UIs to start a Fiverr account looking for UI/UX peeps.
And then there's the GEGL stuff that's leaking implementation details to the user: obviously it should be fixed, but I am certain you can find similar stuff in Adobe's products.
I, for one, having recently been pushed to online MS stuff, certainly see plenty of that in their tools (too many, really, even worse than GNOME ever was when I was active there).
>I am certain you can find similar stuff in Adobe's products
It’s just not comparable, and I’m sorry, but with the history of GIMP it’s all just indefensible. Let’s not forget that in the two decades it took them to implement Adjustment Layers, Blender started focusing on the user, not the developer, and became a huge contender against non-open software; it’s hard to find 3D artists under 25 who didn’t learn via Blender and use it professionally.
An opportunity completely squandered by a poor culture.
GIMP itself has been going on for around 30 years now. I think it proves that the approach to development and design is "defensible".
Blender entered where there was no other good competitor in the market, with a company behind it that built a business around it, and set the standards for UI.
GIMP always kinda had to fight against the incumbents that are too ingrained into customer muscle memory to accept any change. Really, are you saying that the location for GEGL filters in the menu is what stops you from using Gimp?
So the GIMP team wisely chose not to fight, and to build their own thing that serves (hundreds of?) thousands of happy users worldwide (I am one of them: I don't do image editing professionally, but people have complimented me on what I've achieved with Gimp; similarly, moving away from Gimp shortcuts would be expensive for me and would make me really hate any big change of the sort).
Think we’re revising history here. Blender was completely shunned until they started doing the exact opposite of GIMP and building for their users instead of building for the sake of building. It certainly had no first-mover advantage; even poor students spent thousands just to not have to learn it, and no professional studios used it.
This all changed within a few years of them fixing the UI and focusing on users.
>GEGL filters in the menu is what stops you from using Gimp?
It’s one example, but my point is already proven by you calling them “GEGL Filters”. Step back from your biases for a second and really think about what you wrote and the wider implications for the rest of the application and its users.
People just feel they instinctively have to defend GIMP because it was one of the early, larger real desktop Linux open source successes, but to me it represents one thing: completely wasted opportunity, and the importance of how the culture and ideology of a team can squander something that could have been amazing.
“Oh, people would never have switched from Photoshop, the workflow and keys are different” is pure cope; we know this because Figma decimated Photoshop and Illustrator as web/app design tools in about 2 years just by offering a better tool.
GIMP could have done this 20+ years ago with the right ideology.
Gimp was only ever an enthusiast, developer-driven project, and the 1-2 engineers it actively had working on it could not have pulled it off.
It has nothing to do with the ideology, just sheer complexity of the effort: GNOME HIG in the 2.0 times was very much focused on good, consistent UI that caters to the users (mostly driven by Sun Microsystems contributions).
But bringing individual examples of bad UI (I can do so for MacOS, the poster child of usability too) does not mean it's like that on purpose — it's mostly just that, bad instances.
A program is usable based on the whole experience with it, and the results one can achieve. Gimp is not perfect (far from it, really), but for a set of usecases, it is perfectly adequate.
The success or lack of it is not only driven by usability: there are perfectly good tools that simply bit the dust for who knows what reason.
And looking at https://www.blender.org/about/history/, I think Blender mostly owes its success to exactly the marketing approach and not any of the technical properties (its parent companies actually died twice before it was made free software and a base for community competitions).
Which is exactly how I remember it as an outsider (I wasn't interested in 3D at the time).
I don't get the implication. I never claimed I was a designer. I'm a get shit done type of person. An operator if you will. Scariest thing you can show me is a Cmd-n blank sheet of paper. Give me content and a task, and it'll get done. Your assumption I'm a designer is just that, and I'm perfectly fine being an ass on my own and do not need your help
I think they might have been implying that you’re not the person they should be designing for. It’s arguable that they should be designing for people who have never used an image editor before. If they were optimizing for Adobe users, they should just copy Adobe as much as possible.
A design persona is an imaginary person that designers use when thinking about users at a medium level of abstraction…somewhere between actual users and demographics.
I didn’t assume anything about you. I took your words at face value. To the degree I said anything about you it was that perhaps Gimp is not for you (because everything isn’t for everybody).
A "design persona" is a stereotype that designers use to justify their decision to discriminate against some subset of users. It seems innocent at first, but it inevitably devolves into the developers crudely binning individual users who submit but reports of feature requests into those stereotypes and then, very often, disregarding their feedback without thoughtful consideration.
"Our Personas document says this software isn't for engineers who taught themselves to use Linux in highschool. This guy who submitted a request to add tabs to the interface looks like a nerd, we don't need to take his suggestion seriously even though he's suggesting a normal thing people who fit into other personas would also find useful."
I think a better UX for average consumer would be more a side-swiping filter menu similar to that of social media mobile apps, with different non-math style names "default blur", "circle blur", etc. Especially as more people do not use desktop computers today.
Also maybe LLM integration so you can just explain what you want done, then it does it, instead of needing to follow some tutorial to learn the software
> Also maybe LLM integration so you can just explain what you want done, then it does it, instead of needing to follow some tutorial to learn the software
I like how this counts as a reasonable side remark today but would have been utterly delusional just a few years ago.
I remember writing the documentation for the payment processor app used at Iron Mountain and the flow for dealing with a check deposit was incredibly convoluted. The (Windows desktop) application was designed by a team from one of the big 5 consulting agencies and they clearly had never thought about how the application would be used when they designed it.
That's the classic stereotype. What we often find in open-source media applications is intentional and pompous obscurity. "Engineers" use the same words end-users do. Choosing meaningless jargon is just douchey.
That’s not it at all. Everyone implementing an image editor knows what a Gaussian blur is, but the average person doesn’t. It requires active effort to forget what you know and empathize with someone who is seeing these concepts for the first time. In my opinion, it’s an effort that the volunteers working on GIMP aren’t obligated to put in if they don’t feel like it.
The GIMP team actively changed their software to better support user workflows, like when they moved from "save as" (with image formats as options) to "export". So there definitely is intent to do the work necessary to make the software usable for their target group.
Problem was: the change was explained in terms of user personas and their workflows, but there was no mention of user tests...
Honestly, I found that one of the most user-hostile workflows they implemented to date. It's really obnoxious.
The number of times I've wanted to save in their native XCF file format is... zero. But I always want to save in a standard image format, and I don't really consider that to be exporting, just saving.
I understand why they wanted this, but I don't think many of their actual users did.
They do that to preserve data. If you’re making a complex image with all sorts of layers and masks and then you save to a JPEG, you lose all that information as the image is flattened and compressed. Saving in the native format lets you be able to open the file again at a later time and resume working without losing any data.
Users would be seriously upset if they made JPEG the default and the native format a buried option. People would be losing data left and right.
Saving as XCF still loses the undo history, so it's really a question of which/how much information is lost. Meanwhile, if you have a single-layer image and export it to PNG, which preserves as much relevant information as saving it as XCF, it will still complain about unsaved data if you try to close it. Absolutely infuriating behavior that no real user ever asked for.
Affinity does the same thing; I don't remember about Photoshop.
The obnoxious thing is separating "save" and "export" into different menu items. Much (most?) software lets you choose "save as" (including saving as a different format) from the regular File/Save dialog. But Affinity Photo (and apparently GIMP) forces you to cancel out of the Save dialog for the millionth time and go back to the File menu and choose "Export." It's annoying and unnecessary.
I don’t know, pretty much all production software I’ve ever used has made a distinction between export and save. Because export takes compute and can change the output, not all formats are created equal.
Saving in the internal format is probably rare if you’re just a user, but if this is a 40 hour a week job, then the compute time savings and potential disk space saving from doing that might be worth it.
The problem is not being able to make the save/export decision from the same dialog. A lot of software lets you do "save as" and pick a different format AFTER you go down the File/Save path.
Having to cancel out of File/Save and go back to the File menu and choose File/Export, over and over and over in software that defies this convention, is incredibly irritating.
That's only true if the engineers are not allowed to copy/steal from existing designs. There are plenty that are better than GIMP (e.g. Photoshop, Krita, ...). If nothing else, make it easy to build a layer on top so that Photoshop can be replicated nearly exactly.
The one thing I absolutely loved about Ubuntu's original unity desktop was the HUD. Specifically for big complex applications like gimp, libreoffice, kwrite and such. Things that I use infrequently and have no way of knowing all the menu items.
You mean, there is no omnisearch yet? Well, I guess GIMP hackers are more accustomed to Emacs and Vim than, say, Brainjet IDEs, but by now I would expect that kind of quick access to be in any software that has that many tools on its belt.
Also, yes, definitely users shouldn't be bothered with in-your-face nomenclature which is irrelevant to the action. This is nothing specific to software engineering though; compare abelian algebra and commutative algebra, cubism and orphism, etc.
Naming things after the most prominent phenomenal trait of what's designated is often in competition with many other perspectives.
If I understand correctly, "omnisearch" would be pressing a button to pull up a box to search through all the menu options? If so, then yep, GIMP's had that for a long time by pressing "/".
GIMP is great software with sometimes less-than-great UX.
I wonder if a project that replaces the "chrome" of GIMP with a different UX would be viable. Imagine a reworked menu / shortcut / dialog system that controls the unchanged core. Even better, imagine UI and UX to be live-tweakable, written in Python / Lua / Guile / you name it. That would make discovering better UI layouts and better UX flows absurdly easier.
(Yes, as an Emacs user, I want more software to be like Emacs.)
With script-fu or python-fu you can make menus, sub-menus, and similar UX adjustments. I also make a command and just run it using slash: /command. Better than clicking around, but not live-tweakable; you have to refresh scripts first. In case of some plugin errors, sometimes Gimp just dies, which is a problem when trying to develop the plugin.
Script-fu plugin experience is definitely not great, but it has the potential to customize stuff.
Script-fu, however, is totally limited: it cannot access files, it cannot do anything outside of Gimp, in contrast to Elisp. I wonder why that is - security reasons, as protection from malicious plugins?
Python-fu is another option; I haven't used it, but I want to try it at some point. When I find some simple examples of python-fu code to learn from, I'll dive into it a little bit.
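In case it helps, a minimal GIMP 2.x-style python-fu plug-in looks roughly like this (the procedure name and menu path below are made up for illustration; note that GIMP 3.0 switched to a GObject-Introspection-based Python API, so this older gimpfu style only applies to 2.x):

    #!/usr/bin/env python
    # Minimal GIMP 2.x python-fu plug-in: registers a menu entry (also reachable
    # via the "/" action search) that converts the active image to grayscale.
    from gimpfu import *

    def quick_grayscale(image, drawable):
        pdb.gimp_image_undo_group_start(image)   # make the whole action one undo step
        pdb.gimp_image_convert_grayscale(image)
        pdb.gimp_image_undo_group_end(image)
        gimp.displays_flush()

    register(
        "python_fu_quick_grayscale",              # procedure name
        "Quick grayscale",                        # blurb
        "Converts the active image to grayscale", # help
        "me", "me", "2024",                       # author, copyright, date
        "<Image>/Filters/Custom/Quick Grayscale", # menu location
        "RGB*",                                   # accepted image types
        [], [],                                   # no extra parameters, no return values
        quick_grayscale)

    main()

Drop it into the plug-ins folder, make it executable, and the entry shows up under Filters after a restart (and, as you say, refreshing scripts is needed after each change).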
I like your first idea (new UX layer over unchanged core) more than the second (fully customisable), although they're not mutually exclusive.
An understated benefit of a consistent UI is if the user gets stuck and searches how to do XYZ, often an LLM or search engine will give an accurate answer as it's been answered before in forums etc. But if the UI changes every few months, there's often no answer.
It's always the same trade-off between having a uniform experience and the ability to fix unfortunate decisions more easily.
The UI default shipped should not materially (if any) change every few months. But power users should be able to tweak the UI and UX even more, and publish their tweaks. Some sets of tweaks might become popular among other power users, and the best finds could find their way upstream.
What I strive to achieve is speeding up the process of UI evolution, of finding better approaches to UI and UX. This may be enabled by an ability to tweak UI/UX without recompiling the whole thing, and by not having to write the UI/UX in a low-level footgun-ridden language which is C. Even now GIMP allows for quite a bit of customization within its built-in UX.
The untweaked, vanilla experience should be good enough, stable, and the norm for non-advanced users, very much like the current UX.
For a battle-tested example, look at MS Office. You can tweak its UI in rather drastic ways; I've seen VBA apps that make Excel barely recognizable, while harnessing its power. But most users never ever alter a single button on a single toolbar, and are fine following video guides showing where to click.
For the latter you make a special “dialog” that has all the features in a single list. You make it anyway, because it’s part of the UI customization menu, but this one is separate, for search uses.
Pressing where, in Gimp? I was talking about an average app with customizable UI. But in general, search can’t replace a list, because search is not discoverable. You can only search for what you know and remember, and you may not remember e.g. “Morphology…” until you see it in that section - ah, right, that’s what it was called.
More generally, and just as a tangential sentiment, I don’t understand the last decade’s trend of dumbing down apps and removing (/not adding) features because X is enough. Back in the day we had full dialogs called by a shortcut, with searches, menu builders, etc. Nowadays everyone tries to sell only a shortcut, or only a filter, or only nothing. But why-y? Humans have multiple mechanisms of memory and orientation (and these may heavily vary across the population), and people are throwing them away deliberately.
Just start from scratch at that point, is there much of value in the core of GIMP?
It’s all pretty antiquated and very 90s-00s level in terms of capabilities. Not even talking AI - more text editing, non-destructive processes, and GPU acceleration.
I know 3.0 aims to address some of this but it’s too little too late and you wouldn’t get much benefit being downstream from a low output team.
I think there are a lot of valuable things in the engine part: data presentation, tools, filters, non-destructive editing, etc. I wish GIMP had a "narrow waist" at this level, like games are usually split between levels-and-scripts and the engine, or compilers are split between parsing / syntax / high-level concerns and code generation at the intermediate language representation (the part "below the waist" is often LLVM).
It just isn't the foundation you'd build on if you valued these things; they only added non-destructive editing recently, and it's not like it was core from the start. There are many other projects you could build upon that have this at their core.
Krita or, honestly, even Blender's shader graph would be a better and more modern starting point if that's what you wanted, and you'd have a more active base where you're going to get value downstream in optimizations and more filters, on a timeline that isn't multiple decades.
I remember Gimpshop, and I think it was a good experiment.
I also think that the effort to create such experiments should be made lower: supported, reasonably future-proof code that runs within the host application, instead of a laboriously maintained fork.
Really weird that none of this is included as a screenshot or GIF in the release article - my opinion is that this matters a lot when you release a major version of a gfx package.
If it's a menu or toolbar and it mentions GEGL, it's wrong. GEGL is not something that end users should have to care about. Not to mention superfluous, since almost every fancy operation uses GEGL under the hood anyway.
May I ask a not-so-smart question... What is the big deal with thumbnails for YouTube videos? Like, I am always hearing about these thumbnails as if they can make or break a video's/channel's outcomes.
Youtube content is driven heavily by children and sheltered people who are easily engaged with animalistic displays of expression and bright colors. I wish I was joking
Mostly videos about getting chai lattes on the roof, 3-course meals from 4-star restaurants for lunch, and yoga sessions in the early afternoon after a quick meeting with the manager. Aka "Average day in the life of a YouTube engineer".
Traditional print publishers put lots of efforts into making the covers of their books and magazines attractive and indicative of what you can expect inside.
You _can_ judge a book by its cover, _if_ the publisher is doing their job right.
Sorry for being perhaps a bit pedantic but, not really. You can (and should) get an idea about what the book is about by its cover. But you cannot "judge" the (quality of the) content of the book by its cover.
That would be like saying you can judge the quality of a cereal-brand by its box. Special-K? That is really special.
The Book might be full of lies or incorrect information no matter how beautiful its cover is.
You can judge the quality of the cover of the book by looking at the cover. :-)
None of these will be surprise Weetabix in soy sauce. None is actually a frozen pizza. None is garden fertiliser.
Similar with books and youtube videos. It's not a claim that "beauty is truth and truth beauty", it's a claim the thumbnail/cover tells you what the video/book is about.
For "quality"? Good branding used to be expensive, before GenAI (now I'm not sure). If you could afford it, you could afford good content; if you could only afford good branding or good content but not both… it happens, of course, but it leads to angry refund demands IRL, and no organic viral growth for videos.
Considering that people have been making such complaints for millennia, and we aren't as terrible as so many generations of deterioration would imply, they're usually (but not always) wrong.
You just don't know how bad you are or how bad your society is. Do you think the Taliban thinks they are fucked up? I'm not comparing you to the Taliban, I'm just asking what do you think they think? It's important to think about what we think.
I did a bit of digging, and just like eg your typical American will tell you many of the ways they think the US is 'fucked up', the Taliban will also tell you where their own society falls short of their ideals.
Of course, their ideals will most likely differ from your ideals. In addition, even where multiple people might share abstract ideals, their ideas about how to effect these ideals will often differ.
Anyway, even if people have always complained about it (which remains to be proven really; inb4 single data point of Plato), doesn't mean it's always or even most of the time wrong.
The "kids just lack manners these days" should also be separated from more detailed criticism.
That whole thing about denying that decadence is even possible is so comically a symptom of people being part of said decadence not wanting to admit it.
Thank you, you are much more diplomatic about this. I wrote a few horrifyingly aggressive responses to a few people here about this, because I'm a little astonished that people still give this whole phenomenon a pass.
From my other post:
Overall, when the parents lose control of the child, the culture takes over parenting. When the culture is ridiculous, the child grows to become a ridiculous adult and won't know it and possibly even defend it.
This is only true recently! Through most of history people lived and died with technology and culture nearly identical to that of their parents / grandparents. In terms of tech and cultural evolution, we are on the uptick of a hockey-stick growth.
It's absolutely accurate that `kids these days` have grown up in a different environment than `grownups these days` did, far more so than the same demographics did 50, 100, 200, or 1000 years ago.
Were you raised as the TV generation or the video game generation? Because I guarantee you your elders were saying the same thing about your generation.
Okay. You don’t think this is serious. Hang out on Twitch for a week. If you don’t walk away thinking it’s a crack den of addiction and sex then, just ew, stop talking to me.
Though I haven't seen stats to back it up - I've heard from multiple sources that thumbnails which include a gigantic bobblehead of the author with a particularly exaggerated stupid looking expression on their face induce more people to click through.
Even if it's for the sake of feeding the algorithm, I do my best to skip them.
I also internally prioritize videos which:
- avoid usage of superlatives "TOP X", "BEST OF Y"
- have more than 5k views and less than 250k views.
After a while, my YT recommendations have become mostly solid.
From looking at the link you posted, the immediate observation is that it looks like they're all optimizing for a pornography face. Would not be very surprising, based on the reputation of the internet.
It doesn't exactly help with your goal, but tangentially, I use a browser addon called DeArrow, from the creator of SponsorBlock, which replaces thumbnails and clickbait titles with a video still / user-submitted ones. I often forget it's installed until I use another browser, but it's a really nice experience!
> that thumbnails which … induce more people to click through
Entirely conjecture on my part, but I imagine this _was_ true, has now been done to death, and no longer has any juice left in it. It’s how all the marketing stuff goes: discovered, early adopters get great results, everyone starts doing it and it loses any value.
> discovered, early adopters get great results, everyone starts doing it and it loses any value.
You might still lose out, if you don't do it?
Just like virtually every car these days has great safety features, so it's not a good selling point; but just try selling a car with 1980s levels of safety.
Possibly! I guess it's probably best to be guided by if the real pros are still using them. I think people are less likely to click on obvious clickbait these days, precisely because they recognize it, than on more authentic headlines. Supporting your PoV though is that MrBeast's noggin' is still prominently in his thumbnails.
You can configure it to not be enabled by default, but you click a little blue circle next to the title and it will show you the community version.
Plenty of good content is forced to play the clickbait thumbnail/title game and it would be a shame to miss some of it because of YouTube's incentive problems.
> gigantic bobblehead of the author with a particularly exaggerated stupid looking expression on their face induce more people to click through
I never avoided these. They naturally make me puke in disgust and want to smash their degenerate faces if I ever see one on the street. No need for doing my best. The realization that so many people happily click through that was sickening at the time. It’s an “open doors” party at the asylum, with people rushing in in excitement.
Btw, many channels seem to have moved on from that, in self-moderation after a short period of experiments. Those who stuck with it showed the biggest increase in mental deficiency, turning into stupid comedy/meme shows rather than original material. One example of that was the new LTT formats, afair.
You know how YouTubers are always talking about how "the algorithm" didn't like this video, or loves that video. Or that "the algorithm" is a huge black box that nobody knows how it works.
Youtube's "the algorithm" will make or break both videos and channels.
But "the algorithm" isn't really a mystery. At a basic level, it just shows a bunch of video recommendations to viewers, and measures if they click it or not (watch time, comments, likes also factor into the algorithm, but none of that matters if they don't click first). The higher the click-through rate, the more the video is pushed in recommendations.
And the only things a viewer sees is the thumbnail, channel name, and video title. They have to decide which video they are going to watch based on just that.
So really, a large chunk of "the algorithm" is just how appealing your thumbnail is to potential viewers.
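A toy sketch of just that click-through part (not the real system, obviously; all numbers are made up, and watch time, likes, etc. would feed in as further signals):

    # Toy ranking sketch: order candidate videos by smoothed click-through rate,
    # so that brand-new videos with little data aren't immediately buried.
    videos = {
        "thumbnail_a": {"impressions": 1000, "clicks": 80},
        "thumbnail_b": {"impressions": 1000, "clicks": 35},
        "brand_new":   {"impressions": 10,   "clicks": 1},
    }

    def score(v, prior_clicks=5, prior_impressions=100):
        # Smoothed CTR: acts like a weak 5% prior until real data accumulates.
        return (v["clicks"] + prior_clicks) / (v["impressions"] + prior_impressions)

    for name, v in sorted(videos.items(), key=lambda kv: score(kv[1]), reverse=True):
        print(name, round(score(v), 3))  # thumbnail_a 0.077, brand_new 0.055, thumbnail_b 0.036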
There actually is an outcome worse than a missed click: if too many viewers are abandoning your video within the first few minutes (i.e. before midrolls), you'll experience substantial downranking.
More or less, the art of making a successful video requires:
* An attention-grabbing thumbnail
* A curiosity-provoking title & premise
* A strong hook which convinces the viewer to put the screen down and let it run
* Editing which delivers the information at an engaging (yet monetizable) pace
* Packaging said information so that it is intelligibly balanced across the mediums (audio/text/video)
* ^^^ Doing this all in a style which still retains enough uniqueness to establish a repeat viewerbase
"The algorithm" is a system for efficiently delivering novel videos with these qualities to the audiences who will most eagerly consume them, which is an essential function for a platform with 2 billion monthly users. For every video on lowest-common-denominator celebrity junk, there's a dozen niche videos tailored to some ravenous subculture or other. Not all magazines are tabloids... but just about anyone can kill time with a tabloid, so that's what leads.
Unlike magazine stands, however, the platform will eventually learn to only show you the thumbnails for videos you'll want to finish watching. It's almost embarrassing to share... but here's an example batch of 12 recommendations, almost all of which I'm likely to (eventually) click on and fully watch: https://i.imgur.com/dygfXXb.png
It requires all of these things, yes, but it also must deal with a topic approved by Google. So, not offensive to an American coastal liberal type of person.
One popular channel that commentates on American police body cam footage replaces the gunshot sounds with animal noises. It's ridiculous. Also no discussion or use of tobacco. Many times I've heard people refer to Adolf Hitler as "bad mustache man" for fear of getting censored or demonetised. One channel that discusses historical firearms is censoring the 1933-1945 German Reichsflagge. And so on and so on. It's all so tiresome.
It's unfortunately not just YouTube but all social media. Self-censoring like "unalived" or "k**ed" to avoid the wrath of the algorithm is becoming all too common.
I think the mystery is in the categorization of interests and user profiling. The click is central to the process, but the magic is getting your pie video in front of someone on a baking video binge, and not someone trying to fix their oven.
Think of it like a book cover. Regardless of a book's title or summary, an attractive (attracting? attention grabbing?) cover will get more people to click it. Also depending on where the thumbnail is displayed, you may not see the full video title, such as the grid after a video finishes playing.
I do hate that style of thumbnail too. But hey, it just tells me not to click on the video. So it serves its purpose for me. But metrics I guess tell the youtubers that that thumbnail brings in lots of traffic. I think especially younger kids gravitate to it, unfortunately.
It's a smart question. It wouldn't seem to make much difference, since you'd assume people would have channels they're subscribed to, or would search for something specific. But much of YouTube watch time is based on discovering new videos. And a big part of that is the thumbnail people see when they're looking at suggested content, related content, or search results. Colors, whether there are faces, fonts, images, etc. all make a difference. And it varies over time and genre. YouTube has tools built in so you can A/B test different thumbnails and automatically select the better-performing one.
Note that I haven't done any of this myself except for making a couple thumbnails for personal videos. I was curious and watched a YouTube video about thumbnails and why they're important.
At the end of a video, youtube shows thumbnails for several "suggested next videos".
The thumbnail, video name and channel name are the only bits of information potential viewers see - if your thumbnail isn't good, they aren't even going to _start_ your video, let alone keep watching it.
I think there's a minor confusion going on here with respect to what constitutes a good YouTube thumbnail, as well as what constitutes good YouTube content.
Very few of the channels I subscribe to and watch, such as Forgotten Weapons, Technology Connections, Scott Manley, USCSB, CuriousMarc, media.ccc.de, etc., use that open-mouth YouTube face for their thumbnails, if they display a human face in a thumbnail at all. I would consider Doug Demuro videos to be embarrassingly deep into "typical YouTube stupidity" realm to admit watching, but even he tends to leave his mouth less than fully open.
Do they not engineer their thumbnails to "appease the algorithm"? They do - by showing accurate and intriguing previews of what is to be presented, which for those channels often happens not to be an adult male human face with all orifices articulated to near or past their mechanical limits; that, by the way, is just another one of the optima.
The statement "one can(not) judge a book by its cover" is not functionally equal to "books dipped longest in fluorescent yellow dye sell the most", at all. Apples and oranges both have their place.
It does sound silly, but it also points to the shallow nature of YouTube content. If you don’t have a good thumbnail, it’s going to limit the success of your video, because some percentage won’t click on it. It’s just how it goes.
From my practical experience, they seem to make a difference in initial engagement which in turn makes a difference in algorithmic promotion which in turn makes a difference in overall “performance.”
Tracking what you click on versus what you don’t, you might recognize that certain thumbnails make you more likely to take a pass.
For clarity, my experience is at hobby scale and part of that hobby is learning about youtube. Trying to make the line go up is fun (until it isn’t) and a little attention to thumbnails seems to make the line go up.
Everyone knows you need the mouth open dumb face or the completely fake AI image for max brain rot which drives views and "engagement" (commenting on the fact that it's nonsense with people then defending the video as an authority as if it isn't stock footage with an AI voice reading a reddit comment).
* Drawing geometric shapes still requires dealing with paths, instead of having a pre-defined set of the most common ones, like in any other drawing program.
> UI/UX: "Tool->GEGL Operation..." is too much friction for such a common operation- just pop it up when you click on the "FX" button in the layers window.
You can make this a button in the toolbox in the settings.
WTF is a "GEGL" operation? I've been using (and have written) image-processing and -manipulation tools for decades, and I've never heard of such a thing.
GEGL is just a library created to modernize GIMP's image manipulation pipeline. It forms a DAG of image operations. It's what unlocked non-destructive edits, and porting everything over to it was a pretty massive undertaking (though it probably didn't need to take 20 years...)
End users really don't need to know about it. Its exposure in the UI is likely just because a lot of stuff it can do isn't available yet in the traditional GIMP UI.
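To make "a DAG of image operations" concrete, here's a toy illustration of the idea (deliberately not GEGL's actual API, just the shape of it):

    # Each node wraps an operation; its inputs are other nodes; asking the final
    # node for its result pulls computation through the whole graph.
    class Node:
        def __init__(self, name, op, *inputs):
            self.name, self.op, self.inputs = name, op, inputs

        def process(self):
            return self.op(*(n.process() for n in self.inputs))

    load     = Node("load",     lambda: [0, 50, 100, 200])                       # toy "image"
    blur     = Node("blur",     lambda px: [sum(px) / len(px)] * len(px), load)
    brighten = Node("brighten", lambda px: [min(255, p + 30) for p in px], blur)

    print(brighten.process())  # processing the sink evaluates load -> blur -> brighten

Non-destructive editing falls out of this shape: the original pixels stay at the "load" node, and changing a parameter just re-runs the affected part of the graph.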
I'm a data engineer so DAG/node systems in content creation software delights me (Blender, Resolve, Houdini). Terrible it's hidden behind the name "GEGL Operations". I'm exactly the person who will get it and love it immediately, but will never find it. Sounds like your parent commenter is the same. GIMP has always been a leader in UI self-owns.
"End users really don't need to know about it... a lot of stuff it can do isn't available yet in the traditional GIMP UI"
So why "don't they need to know about it?" And regardless, putting a meaningless label on it is a user-hostile blunder. This blunder is not all that uncommon either. Affinity did it by burying a bunch of stuff under a menu item called "Studio." Not as bad as "GEGL", but still meaningless.
I think you maybe didn't see the "yet" in there. This is software written by unpaid volunteers. There is no user-hostility, only a lack of time and help at implementing things.
I wouldn’t exactly, but some menus are more predictable (for me, at least) than in Photoshop etc. Filters are also more logically organized (or at least were before 3.0; I haven't tried it yet, but the discussion around whatever the fuck GEGL operations are doesn't give me a lot of hope).
I never made youtube thumbnails, but if I had to, I’d take 100 paint.nets and 0 gimps. Everything a youtube thumbnail ever needs can be easily done in a few minutes in paint.net. It’s like a standard toolbox for graphics.
GIMP is almost designed to be passive aggressive and sometimes hostile to its user. Last year I had to regularly process some images in gimp on linux (cropping, composition, other basic ops) and seriously considered installing samba, qemu and windows only to share these pngs with paint.net. I can’t even fathom the pain of having to do something like a thumbnail in it.
Have you looked into whether Paint.NET works with Wine? The latest report says there was an issue with the theme, but it otherwise worked for them, though they didn't really test much. They were not trying the latest version of Wine at the time they were trying Paint.NET.
If you like Paint.NET, it might not take that much effort to file bugs against Wine for any problems you find until it works correctly. There is at least one user reporting that it works.
If you want something like paint.net, there's pinta; use that. If you want something like krita, use that. And if you need something like gimp, paint.net is not even in consideration, because it is decidedly something else.
I tried all of that and they all just suck. E.g. neither Pinta nor Krita shows selection rectangle coords in the status bar. How the heck should I, e.g., investigate screenshot-based automation failures with that? What makes paint.net stand out is that it is a pixel canvas tool with layers rather than an artistic canvas for soft brushes. And at the same time it has all its features easily accessible.
I have also worked as a “designer” in a real typography shop for some time and know a thing or two about the process (not fully, but count me as sort of an insider). And I can tell you that gimp is only “an artist’s impression” of a graphical toolbox, and that artist is heavily drunk. If you need something like gimp and paint.net/etc isn’t enough, you need photoshop, because we’re talking about serious pre-print color management, etc. Gimp is two parts: a very poor home graphics editor and a very poor industrial graphics manager.
Youtube thumbnails are paint.net 100%. I use that all the time for all my graphics, technical and creative, tried all others, and have no reason to switch, apart from OS requirements.
As a developer of these apps, you owe me nothing, no need to treat every explanation as a demand. I just chose an app that has basic features like selection coords from the get go and shared my experience on a forum where it might be useful to others. How exactly people are using these coords -- for automation, or pixel arts, or precise composition/aspect ratio, or something else -- isn't relevant, same for other commonly present features.
I just tried Pinta and retract my words about it, cause it shows coords. This could help with my gimp suffering.
But generally it's really not as nice as Paint.net. I don't like the ui at all (washed and bleak, hamburger menus, checkboxes on layers swallow second clicks, clicks on icons sometimes demaximize window, etc, also slow). It's just bad. Idk if it's gtk3/4 or pinta itself. But it generally works and apart from that I'll give it a 4 out of 5.
Pinta *IS* Paint.net, just forked at an earlier stage before the latter became closed source software.
Also, is "washed and bleak" really that big of a problem for an image editor? It just doesn't matter what it looks like as long as the UI is intuitive and has the features that you need. It should also be noted that Pinta very much looks just like your overall Desktop appearance on Unix. I'm on XFCE and it's so incredibly theme integrated it looks like it's part of the system.
Personally, I really like Pinta. Biggest problem is the bugs and crashes. Wish I could use actual Paint.net though, but there's no way to use it on Linux.