I've just tried this and it grinds my Google Earth to a halt... I doubt it'll successfully open the file, and I have a pretty decent laptop that I'm running it on. So in short, I don't think you'd want to do this.
If you do want to try it, I'd suggest tiling the dataset into smaller chunks, making every chunk a different KML layer (or even a separate KML file) so you don't have to have everything visible all the time, and setting the kml_visibility attribute to 0 so everything defaults to hidden.
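To illustrate, here's roughly what one tile of the output could look like, assuming kml_visibility ends up as the standard <visibility> element in the written KML (the tile and point names are just placeholders):

  <?xml version="1.0" encoding="UTF-8"?>
  <kml xmlns="http://www.opengis.net/kml/2.2">
    <Document>
      <!-- One tile of the dataset; stays hidden until ticked in the Places panel -->
      <Folder>
        <name>Tile_01</name>
        <visibility>0</visibility>
        <Placemark>
          <name>Point 1</name>
          <visibility>0</visibility>
          <Point><coordinates>-123.1,49.2,0</coordinates></Point>
        </Placemark>
        <!-- ...more placemarks for this tile... -->
      </Folder>
    </Document>
  </kml>

With everything hidden by default, Google Earth still has to parse the file, but it doesn't try to draw every point at once, which is where it really bogs down.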
OK, it did open the file correctly, although it took several minutes. Once the points are all in, it's not that bad, really. The "Places" tree is sluggish and querying a point's attributes is slow too, but it's not nearly as bad as I thought.
When dealing with large amounts of data in KML, I find that using regions is key. See
https://developers.google.com/kml/documentation/re... for an overview.
I have in the past used FME to create different LODs. For example, points representing 100 000+ airphotos were reduced to one point per NTS 250K map sheet at the coarsest resolution, one point per 50K map sheet at medium resolution, and the actual points at the highest resolution. At any given zoom/area only the relevant points were loaded, so instead of trying to render 100K points and grinding to a halt, Google Earth only had between 100 and 1000 points to draw.
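As a rough sketch of how the region-based loading hangs together (the file name, sheet number and pixel thresholds below are made up for the example, not what I actually used): each detail file is pulled in through a NetworkLink whose Region only becomes active once its bounding box takes up enough of the screen.

  <!-- Detail points for one 50K sheet; only fetched once the region is "big enough" on screen -->
  <NetworkLink>
    <name>Airphotos detail - sheet 092B</name>
    <Region>
      <LatLonAltBox>
        <north>49.5</north><south>49.0</south>
        <east>-123.0</east><west>-124.0</west>
      </LatLonAltBox>
      <Lod>
        <minLodPixels>256</minLodPixels>
        <maxLodPixels>-1</maxLodPixels>
      </Lod>
    </Region>
    <Link>
      <href>sheet_092B_points.kml</href>
      <viewRefreshMode>onRegion</viewRefreshMode>
    </Link>
  </NetworkLink>

Google Earth only fetches sheet_092B_points.kml when that box covers at least 256 pixels of the view, so the full point set is never loaded all at once.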
Also useful is separating your data into different folder hierarchies and having the visibility turned off by default.
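Something along these lines (the grouping and names are just an example):

  <Document>
    <name>Airphotos</name>
    <Folder>
      <name>1960s flights</name>
      <visibility>0</visibility>
      <open>0</open>
      <!-- placemarks or NetworkLinks for this group -->
    </Folder>
    <Folder>
      <name>1970s flights</name>
      <visibility>0</visibility>
      <open>0</open>
    </Folder>
  </Document>

That way the user only switches on the branch they actually care about instead of the whole dataset.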