[fpc-devel] 134 open merge requests - is that normal?
Martin Frb
lazarus at mfriebe.de
Tue Apr 7 15:41:34 CEST 2026
On 07/04/2026 14:44, Marco van de Voort via fpc-devel wrote:
>
> Op 7-4-2026 om 13:36 schreef Martin Frb via fpc-devel:
>>
>> Well, what I can say for myself (slightly modified for those less git
>> experienced). If I make a commit, that I deem might be a fix => I can
>> immediately make a local cherry pick.
>
> If it is a pure fix, maybe. If it is part of a restructure, that is
> harder.
It should still be "simpler right then than at any time later"?
**If** you decide it gets picked, then at some point you have to do the
work of picking it.
When you have "just" (recently) written it, and still remember the decisions
behind the little details, it should be easier than it will be when
you return to it later.
Worst case, you have to (partly) rewrite it for the fixes branch (if
that is allowed).
>> I don't have to push that. I can keep it until I know I want to push
>> it. (I can also push it elsewhere and ask for review).
>> If there is other incoming on the branch, while I haven't pushed, the
>> conflicts with that are usually smaller.
>
> I hardly can leave commits in local repos for long, as I work on
> multiple machines, so that would not be an option for me.
>
> Yes, I know I can invest an insane amount of time to interface it via
> my own gitlab repository, but that would only be more work, both in
> setting up as in daily use.
Actually, you only need a private gitlab if those machines can never see
each other via the network (not even occasionally).
But yes, it's a bit of one-time setup. (I wouldn't call it an insane
amount, but if it's all new to you, you may spend a day...)
E.g. I have my laptop and desktop (and all my VMs) speak directly to each
other.
When I do a git pull on my laptop, it always pulls from the official
gitlab, and from my desktop (if it is on and my laptop is in the home
network; otherwise that remote is simply unreachable).
For the VMs, I use access via a shared folder. For my laptop, I run
"git daemon" on my desktop:
- spend an hour (2 if there are issues, 3 if you have a firewall that is
really acting up) to set up git daemon
- spend 10 minutes to do: git remote add NAME url
- spend a bit more time to learn how to set different remotes for your
branches before you push them
If you work with a GUI the last step is easy too.
And, last but not least: before you push into a local (non-bare) repo,
make sure the branch you push to is not checked out there. (It should
give an error otherwise, iirc.)
I solve that by always working on a new branch when on my laptop (or any
secondary machine).
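The pieces above can be sketched with throwaway repos. All names and
paths here are made up for illustration; in real use the second remote
would be a git:// URL served by "git daemon" on the desktop:

```shell
set -e
# Throwaway demo: a second remote next to origin, and pushing into a
# non-bare repo without touching its checked-out branch.
rm -rf /tmp/gitdemo && mkdir /tmp/gitdemo && cd /tmp/gitdemo
git init -q desktop
(cd desktop && git config user.name me && git config user.email me@example.com \
  && git commit -q --allow-empty -m "initial")
git clone -q desktop laptop && cd laptop
git config user.name me; git config user.email me@example.com
# In real life this would be e.g.: git remote add desktop git://desktop.local/fpc.git
git remote add desktop ../desktop
git fetch -q desktop
# Push to a branch that is NOT checked out on the desktop side;
# pushing to its checked-out branch would be refused by default.
git commit -q --allow-empty -m "laptop work"
git push -q desktop HEAD:refs/heads/laptop-work
```

A per-branch default (git config branch.NAME.pushRemote desktop) then
makes a plain "git push" pick the right remote automatically; a GUI
usually exposes the same setting.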
>
>> And if you need a final selection, you can cherry pick from the
>> cherry picks...
>>
>> IMHO git offers some small additions over SVN here. But the real
>> advantage only comes if everyone participates.
> IMHO only if you fully follow the git suggested workflow to the
> letter. As soon as your workflow situation differs, the whole thing
> comes crashing down. IMHO git has the doubtful attribute of being both
> too flexible and too inflexible at the same time.
I don't think I follow the "so-called default" workflow at all. I do a
lot on the main branch. I hardly use merge; I do plenty of rebasing instead. ...
>>> To be fair: merging in the compiler is harder than what I do , since
>>> things are more interconnected there.
>> Which imho makes it even more important to merge/pick "potential
>> fixes" asap. Once you start forgetting all the nitty gritty of your
>> change, picking them becomes much harder.
>
> There is IMHO still a big difference between immediate and after a few
> weeks. The need for immediate merging action without first seeing how
> it fares in main is IMHO too limited. That is the whole idea of
> having two branches.
Then have 3 branches:
- main
- maybe fixes
- fixes
Merge (cherry pick) immediately to maybe-fixes. Since that already has
none of the "non fix related" commits, you will likely get to do most of
the conflict resolving then.
Later, if confirmed, you can cherry pick again. That can have another
conflict, but is less likely to.
You can have a "dummy branch/tag" on maybe-fixes that you move
forward to the first commit not yet merged to the real fixes.
It's not a standard workflow, but it should work...
For me, "maybe fixes" is my local tracking branch of fixes (so I need to
rebase if others merge to fixes). But 99.999% of the time such rebases are
fully automatic.
And because I keep it local, I can simply remove a commit (skip it during the rebase).
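As a sketch, with a throwaway repo (branch and file names are made up;
"-x" is what records the "cherry picked from" line in the new commit):

```shell
set -e
# Throwaway demo of the main / maybe-fixes / fixes flow.
rm -rf /tmp/pickdemo && git init -q /tmp/pickdemo && cd /tmp/pickdemo
git config user.name me; git config user.email me@example.com
echo base > file.txt && git add file.txt && git commit -qm "base"
git branch fixes
git branch maybe-fixes
# A commit that might be a fix lands on the main branch:
echo fix >> file.txt && git commit -qam "fix: foo"
FIX=$(git rev-parse HEAD)
# Pick it to maybe-fixes immediately, while the details are fresh:
git checkout -q maybe-fixes && git cherry-pick -x $FIX >/dev/null
# Later, once confirmed, cherry pick from the cherry pick into fixes:
git checkout -q fixes && git cherry-pick -x maybe-fixes >/dev/null
```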
>> They are on the attract others side...
>
> So I stand by my conclusion that it is not the landslide feature as
> it was made out to be.
Well, true. They are still contributed patches.
The ability to view them online is a personal preference.
>
>>> Anyway, if we reverse the order, then if I understand everything
>>> correctly we would have the robo commits to advance the fpcbuild
>>> submodule link in the fpc/ repo ? No thanks!
Ideally they would be part of the commits that make them a
requirement.
>
> How do you enforce that? A post commit script at gitlab that updates
> and the squashes the commits into one?
I don't know the workflow. But let's say I committed a change to the FPC
sources that also needs an update in fpcbuild.
Then I need to make commits to both repos, right? The actual content
change in fpcbuild (required for my fpc-src change) does not come from
nowhere.
So, if I made both commits, it would be my responsibility to change the
submodule pointer in the same commit and push it all at once.
Of course, this requires everyone to know how to do this. (Or do an MR;
then someone can take my changes, add the submodule stuff, maybe squash
it, and commit it for me.)
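A minimal sketch of that responsibility, with stand-in repos ("super"
playing the fpcbuild role, "sub" the fpc-src role; all names are made
up, since I don't know the actual FPC workflow):

```shell
set -e
# Throwaway demo: the content change lands in the sub repo first, then
# the superproject advances its submodule pointer in its own commit.
rm -rf /tmp/subdemo && mkdir /tmp/subdemo && cd /tmp/subdemo
git init -q sub
(cd sub && git config user.name me && git config user.email me@example.com \
  && echo v1 > src.txt && git add src.txt && git commit -qm "initial source")
git init -q super && cd super
git config user.name me; git config user.email me@example.com
# protocol.file.allow is only needed because the demo uses local paths:
git -c protocol.file.allow=always submodule add ../sub sub >/dev/null
git commit -qm "add submodule"
# 1) commit the content change inside the submodule checkout:
(cd sub && echo v2 >> src.txt \
  && git -c user.name=me -c user.email=me@example.com commit -qam "source change")
# 2) record the new submodule commit in the superproject, in the same
#    commit as any related changes, and push both together:
git add sub
git commit -qm "update for source change (advances submodule pointer)"
```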
>>
>> The robo-commits basically signify that they are one single repo. The
>> split is arbitrary, the could be all in one git repo.
>
> I think the main reason is to be able to check out the multiple repos
> (that are IMHO separate administrative units) as one checkout with one
> branch name. E.g. for release and snapshot scripting. Also in the
> past, fpcdocs was versioned, that might also have been a factor. (the
> fixes fpcbuild links to the fixes docs). You could switch the version
> of the docs on a per branch basis (for trunk-fixes-release
> candidate-release).
Yes, you want to be able to build any old version (or any branch), but
the emphasis is on: any old version. Pick a random fpc-src commit, and
you should be able to get the matching fpcbuild.
That works right now, or at least it should.
But if you had one big repo with both contents in it, that would work too.
And it would work if the dependency order was reversed.
But that also shows the point:
You want to build a specific version of the fpc sources.
You (probably) never start by saying: let me build "fpcbuild" from Jan
2nd 2024, noon (e.g. in a git bisect). So your starting point is the
correct fpc-src commit (history or branch), and then you find the
fpcbuild that matches.
That would suggest fpcbuild should be the submodule.
>
> I might be doing something wrong but iirc this didn't work for me on
> windows for RC1 and ended up hand copying repos in each other to get
> a buildable complete tree, so the current situation is not ideal.
I don't use submodules myself, so I don't know how well they really work
(when/where you need to give arguments like --recurse-submodules, and
when/how to recover when you have managed to get them out of sync).
>
> If git is too inflexible, I rather would lean towards adapting those
> scripts for multiple checkouts than stuff everything in one repo. But
> that depends on my guestimate on the reasons, and might require
> careful synchronizing branch names between repos, so that you can
> check them all out with one branch name.
If you only build TAGGED versions (or the HEAD of a branch), and you keep
your tags consistent across repos, then you probably don't need the submodules.
But if you go to a random commit in the fpc sources, then you need the
correct commit in fpcbuild, and there is no tag for it.
The submodule will get you there.
Though having to search for the commit in fpcbuild that has commit
xxxx of fpc-src as its submodule sounds like a heck of extra work.
If the order were reversed, then when you checked out (with --recurse-submodules)
in your fpc-src checkout, the fpcbuild would follow on its own.
As I said, I don't use it, so this is "in theory"...
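That "in theory" can be sketched with throwaway repos (the
--recurse-submodules flag is real git; the repo layout and names are
made up): checking out an old superproject commit drags the submodule
along to its matching state.

```shell
set -e
# Throwaway demo: recursive clone, then going back one superproject
# commit together with its recorded submodule commit.
rm -rf /tmp/recdemo && mkdir /tmp/recdemo && cd /tmp/recdemo
git init -q sub
(cd sub && git config user.name me && git config user.email me@example.com \
  && echo v1 > f.txt && git add f.txt && git commit -qm "v1")
git init -q super && cd super
git config user.name me; git config user.email me@example.com
# protocol.file.allow is only needed because the demo uses local paths:
git -c protocol.file.allow=always submodule add ../sub sub >/dev/null
git commit -qm "super v1"
# the submodule's upstream moves forward...
(cd ../sub && echo v2 > f.txt \
  && git -c user.name=me -c user.email=me@example.com commit -qam "v2")
(cd sub && git pull -q)
# ...and the superproject records the new pointer:
git add sub && git commit -qm "super v2"
cd ..
# a recursive clone gets the right submodule commit automatically:
git -c protocol.file.allow=always clone -q --recurse-submodules super clone
cd clone
# going back one superproject commit also moves the submodule back:
git checkout -q --recurse-submodules HEAD~1
```

After a plain (non-recursive) checkout, "git submodule update --init
--recursive" forces the submodules back in sync by hand.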
> That's because there have been no releases. It usually picks up around
> release time and during the year that a major branch is in beta.
OK, but then the build / some builds are broken before that?
Anyway, there is nothing forcing the submodule link to be updated
immediately. fpcbuild (if it were the submodule) could run ahead for a
while, and then when a nice point is reached, the pointer in fpc-src
gets updated.
(That means the fix won't be immediately visible, but if there have
been broken commits for that long, a bit longer won't hurt.)
>
> ... but I agree this could all be overcome if the submodule construct
> is too hard over time. It is from a different magnitude than e.g. the
> merge tracking.
Well, if you have the "cherry picked from" line in each commit, you can
run a script.
If you missed something, worst case you add an empty commit just to
create a commit message.
But it won't give you storage for commits that have been actively
chosen "not to be merged".
Because you have more states than just merged or "not merged":
- not merged
- not reviewed
- rejected (unless required by another commit)
- rejected / blocked
- maybe want to be merged
- merged
Not sure if you need a reason for "merged":
- fix
- dependency
So you need some storage, but you would have needed that with svn too?
It has just changed to sha1 hashes instead of revision numbers... (which
I imagine could trouble some scripts).
"Maybe want to be merged" could be a branch of already cherry-picked
commits (from which a second cherry pick can happen into the real
fixes). But a text file would do too.
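As a sketch (throwaway repo, made-up branch names), the trailer that
"cherry-pick -x" writes is enough to script a "not yet picked" list.
(git also ships "git cherry", which compares patch-ids instead of
trailers.)

```shell
set -e
# Throwaway demo: list commits on main that were not yet picked to
# fixes, via the "(cherry picked from commit ...)" trailer.
rm -rf /tmp/trailerdemo && git init -q -b main /tmp/trailerdemo && cd /tmp/trailerdemo
git config user.name me; git config user.email me@example.com
echo a > f.txt && git add f.txt && git commit -qm "base"
git branch fixes
echo b >> f.txt && git commit -qam "fix: picked"
PICK=$(git rev-parse HEAD)
echo c >> f.txt && git commit -qam "fix: not yet picked"
git checkout -q fixes && git cherry-pick -x $PICK >/dev/null && git checkout -q main
# collect the origins of everything already picked:
git log fixes | sed -n 's/.*(cherry picked from commit \([0-9a-f]*\)).*/\1/p' > /tmp/picked.txt
# commits on main and not on fixes, minus the already-picked ones:
git rev-list fixes..main | grep -v -F -f /tmp/picked.txt
```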