Pixelation is essentially a lack of information on screen caused by compression. The more you compress a video, the more pixels get lumped together into bigger blocks. That's why the source material is so important: the higher the quality of the source, the better the result after compression. More movement means more information, and when that gets pushed through the same compression "pipe", a mostly still video (less information) comes out looking better than a fast-moving one (more information). This is where VBR (variable bit rate) comes into play: it spends fewer bits on the quiet parts and more on the busy parts, achieving the same or even better quality at a smaller file size. Compression also does things like reduce colour depth, but it's mostly about the differences between individual frames. Rather than "redrawing" every frame from scratch, the encoder compares each frame to the previous one and only "draws" the areas that have changed. This is why a mostly still video can be very small (there's little change), while a video of the same length with lots of movement can be twice as big.
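To make the "only draw what changed" idea concrete, here's a toy sketch of delta coding (my own simplification, not how any real codec like H.264 actually works): store the first frame whole, then only the per-pixel differences, and compress the result. A still clip produces all-zero deltas that squash down to almost nothing, while a moving clip doesn't:

```python
import zlib

# Toy delta coding: first frame stored whole, then only the per-byte
# differences between consecutive frames, compressed with zlib.
W = 64  # hypothetical 64x64 grayscale frames

def delta_encode(frames):
    stream = bytearray(frames[0])  # first frame stored as-is
    for prev, cur in zip(frames, frames[1:]):
        # difference image: 0 wherever nothing changed
        stream += bytes((c - p) % 256 for p, c in zip(prev, cur))
    return zlib.compress(bytes(stream))

# "Still" clip: the same frame 30 times -> all deltas are zero
still = [bytes([128]) * (W * W)] * 30
# "Moving" clip: every frame different -> deltas are full of detail
moving = [bytes((i * j) % 256 for j in range(W * W)) for i in range(30)]

print(len(delta_encode(still)), len(delta_encode(moving)))
```

The still clip compresses to a tiny fraction of the moving clip's size even though both contain the same number of raw pixels, which is exactly why footage with lots of motion needs a much higher bitrate to look the same.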
I don't see how image stabilisation causes pixelation unless the camera has a very small sensor. Digital image stabilisation looks at the picture and smooths out erratic movement by levelling the shot and cropping away the edges that drift out of frame, essentially zooming in a bit. Now, if the camera's sensor isn't great and/or the stabilisation is too aggressive, that crop can lead to pixelation because there isn't enough information left in the video/photo. Just take a JPEG photo and keep zooming in until you see rough edges in the detail. Low light also reduces detail in video/photos, and you'll see black blocks in the dark areas, like watching Minecraft.
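Rough numbers make the crop cost obvious. These crop fractions are my own made-up examples, not from any specific camera, but the arithmetic shows how quickly an aggressive stabilisation crop eats into real resolution:

```python
# Illustration of the stabilisation crop (hypothetical crop fractions):
# after cropping and scaling back up, fewer real pixels are stretched
# across the same frame size.
def effective_pixels(width, height, crop_fraction):
    """Pixels actually kept after cropping crop_fraction off each edge."""
    w = width * (1 - 2 * crop_fraction)
    h = height * (1 - 2 * crop_fraction)
    return int(w) * int(h)

full = 1920 * 1080                                # 2,073,600 pixels
mild = effective_pixels(1920, 1080, 0.05)         # 5% crop per edge
aggressive = effective_pixels(1920, 1080, 0.15)   # 15% crop per edge
print(full, mild, aggressive)
```

A 5% crop per edge keeps about 81% of the pixels, while a 15% crop keeps only about half, and the encoder then has to upscale that back to full frame, which is where the softness and blockiness come from.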
Higher FPS (60 vs 30) provides double the information by sampling the motion twice as often. This means the compression algorithm has to "guess" less while transitioning from one frame to the next, which makes the compressed video look smoother.
Problems come with compression and transferring video. MPEG and HEVC footage is already compressed in camera, just like JPEG photos on phones. This is why RAW is used by professionals: it holds more information, which allows better colour correction, cropping, etc. If you AirDrop videos (on iOS), I've noticed they can end up more compressed than when transferred over a cable, which is why I don't AirDrop my videos anymore. Then, when you've finished editing the whole video and export it, the transcode compresses the video again. And when you upload to Facebook or YouTube, those platforms compress the video quite a bit more, making it look crappy. It's really annoying to watch videos on YouTube only to see that YouTube has ruined them with over-compression. This is why better source material (4K 60fps or 1080p 60fps) looks better on YouTube than 4K 24fps or 1080p 30fps. Even shooting at the highest settings possible and then transcoding down to a lower resolution (while keeping the higher frame rate) ends up looking better than shooting everything at 1080p 30fps and exporting the final video at 1080p 30fps.
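You can see why each step in that chain (camera, editor export, platform re-encode) hurts with a toy model of generational loss. This is just an illustration I made up, treating each lossy encode as quantizing sample values, with each stage using its own quantization step; real codecs are far more sophisticated, but the error still accumulates the same way:

```python
import random

# Toy model of generational loss: each lossy "encode" rounds sample
# values to a grid, and each stage in the chain uses a different grid,
# so the errors stack up instead of cancelling.
def encode(samples, step):
    return [step * round(v / step) for v in samples]

def mae(a, b):
    """Mean absolute error between two signals."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

random.seed(1)
original = [random.uniform(0, 255) for _ in range(10_000)]

camera = encode(original, 4)      # first generation: in-camera encode
exported = encode(camera, 5)      # second generation: editor export
uploaded = encode(exported, 6)    # third generation: platform re-encode

print(mae(original, camera), mae(original, uploaded))
```

The error after three generations is clearly larger than after one, which is why starting from the best possible source matters: every later stage only ever loses information.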
24fps is a remnant of the cinema film era and 25fps of PAL television (the CRT/VHS/DVD days); they're used when you want that cinematic look, but most people use the highest frame rate available.
I've been annoyed by choppiness in (other people's) videos and have dug into it, and it's often an FPS mismatch between the project (in the editing software) and the source material. If you shoot at 24/25fps and your project is set to 30fps, the editor has to fill the missing frames by duplicating source frames at irregular intervals, which shows up as judder. Whereas if you shoot at 60fps, the editing software just drops every other frame, which is almost invisible to the human eye. 60fps is also just seen as more lifelike, more vibrant, smoother, etc.
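The mismatch is easy to see if you sketch which source frame the editor would show for each timeline frame. This assumes the simplest possible strategy (pick the nearest-earlier source frame); real editors can also blend or retime, but the duplication pattern is the same:

```python
# Which source frame lands on each timeline frame, assuming the editor
# simply picks the nearest-earlier source frame (a simplification).
def source_frame(timeline_frame, src_fps, timeline_fps):
    return int(timeline_frame * src_fps / timeline_fps)

# 24fps footage on a 30fps timeline: irregular duplicates -> judder
print([source_frame(i, 24, 30) for i in range(10)])
# -> [0, 0, 1, 2, 3, 4, 4, 5, 6, 7]

# 60fps footage on a 30fps timeline: cleanly drops every other frame
print([source_frame(i, 60, 30) for i in range(10)])
# -> [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

In the 24fps case, frames 0 and 4 get shown twice at uneven spacing, and that uneven cadence is exactly the stutter you notice. In the 60fps case every timeline frame gets a fresh source frame at perfectly even spacing.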
Hopefully that makes some sense. Like I said, I'm no expert, but I've gone down the rabbit hole a few times in search of answers.