• 0 Posts
  • 9 Comments
Joined 1 year ago
Cake day: June 23rd, 2023


  • My employer had an EV cert for years on our primary domain. The C-suite, etc., thought it was important. Then one of our engineers who focuses on SEO demonstrated that the EV cert slowed down page loads enough that search engines like Google might take notice. Apparently EV certs trigger an additional lookup by the browser to confirm the Extended Validation status.

    Once the powers-that-be understood that the EV cert wasn’t offering any additional value, and might be hurting our SEO performance (however small the effect), they had us get rid of it and use a good old OV cert instead.
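
    A rough way to see the extra lookup in question, as a sketch rather than the measurement described above: a few lines of Python that pull a site's certificate and print the OCSP responder URLs a revocation-checking browser would have to contact ("example.com" below is just a placeholder, not our domain).

        # Sketch: inspect which revocation/validation endpoints a site's cert advertises.
        import socket
        import ssl

        host = "example.com"  # placeholder host

        ctx = ssl.create_default_context()
        with socket.create_connection((host, 443), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()

        # A browser that checks revocation would contact these responders; that extra
        # round trip is the kind of additional lookup described above.
        print("OCSP responders:", cert.get("OCSP", ()))
        print("CRL distribution points:", cert.get("crlDistributionPoints", ()))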



  • Port 22 is the default SSH port, and it receives a TON of malicious traffic any time it’s open to the whole internet. 20 years ago I saw a newly installed server with a weak root password get compromised from an IP address in China less than an hour after being connected to the open internet.

    With all the bots out there these days it would probably take a lot less time if we ran the same experiment again.
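
    For anyone curious to repeat that experiment, a minimal sketch (assuming root privileges and nothing else bound to the port): a bare Python listener on port 22 that just logs who connects and when.

        # Toy honeypot: bind port 22 on an internet-facing host and log how quickly
        # unsolicited connections show up. Stop or relocate the real sshd first.
        import datetime
        import socket

        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("0.0.0.0", 22))  # privileged port: needs root or CAP_NET_BIND_SERVICE
            srv.listen()
            while True:
                conn, addr = srv.accept()
                print(f"{datetime.datetime.now().isoformat()} connection from {addr[0]}")
                conn.close()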


  • I don’t understand why Cloudflare gets bashed so much over this… EVERY CDN out there does exactly the same thing. It’s how CDNs work. Whether it’s Akamai, AWS, Google Cloud CDN, Fastly, Microsoft Azure CDN, or some other provider, they all do the same thing. In order to operate, they need access to the unencrypted content so they can decide how to cache it and serve it from those caches instead of always going back to your origin server.

    My employer uses both Akamai and AWS, and we’re well aware of this fact and what it means.
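
    A toy sketch of that caching decision (not any particular CDN's implementation, and the origin address is a placeholder): a tiny reverse proxy that has to read the decrypted response and its Cache-Control header before it can decide whether to serve from cache or go back to the origin.

        # Minimal caching "edge": it can only honor Cache-Control because it sees
        # the plaintext response from the origin.
        import time
        import urllib.request
        from http.server import BaseHTTPRequestHandler, HTTPServer

        ORIGIN = "http://127.0.0.1:8080"  # placeholder origin server
        cache = {}                        # path -> (expires_at, body)

        class EdgeHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                hit = cache.get(self.path)
                if hit and hit[0] > time.time():
                    body = hit[1]  # cache hit: the origin is never contacted
                else:
                    with urllib.request.urlopen(ORIGIN + self.path) as resp:
                        body = resp.read()
                        cc = resp.headers.get("Cache-Control", "")
                        if "max-age=" in cc:  # this decision requires the unencrypted headers
                            ttl = int(cc.split("max-age=")[1].split(",")[0])
                            cache[self.path] = (time.time() + ttl, body)
                self.send_response(200)
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        HTTPServer(("127.0.0.1", 8081), EdgeHandler).serve_forever()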





  • 10 years ago I worked at a university that had a couple of people doing research on LHC data. I forget the specifics, but there is a global tiered system for replicating data coming from the LHC so that researchers all around the world can access it.

    I probably don’t have it right, but as I recall, raw data is replicated from the LHC to two or three other locations (tier 1). The raw data contains a lot of uninteresting data (think of a DVR/VCR recording a blank TV image), so those tier 1 locations analyze the data and remove all of that unneeded data. This trimmed version of the data is then replicated to a dozen or so tier 2 locations. Lots of researchers have access to HPC clusters at those tier 2 locations in order to analyze that data. I believe tier 2 could even request chunks of data from tier 1 that weren’t originally replicated, in the event a researcher had a hunch there might actually be something interesting in the “blank” data that had originally been scrubbed.

    The university where I worked had its own HPC cluster that was considered tier 3. It could replicate chunks of data from tier 2 on demand in order to analyze them locally. Mostly it was used like this: our researchers would do some high-level analysis at tier 2, and when they found something interesting they would use the tier 3 cluster to do more detailed analysis. This way they could throw a significant amount of our university’s HPC resources at targeted data rather than competing with hundreds of other researchers all trying to do the same thing on the tier 2 clusters.
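
    A toy model of the pull-on-demand tiering described above (made-up names and chunk IDs; the real system is obviously far more involved): each tier keeps a local copy of whatever chunks it has pulled, and asks the tier above it for anything it is missing.

        # Toy tiered replication: tier 3 pulls from tier 2, which pulls from tier 1.
        class Tier:
            def __init__(self, name, parent=None, seed=None):
                self.name = name
                self.parent = parent
                self.store = dict(seed or {})  # chunk_id -> data

            def fetch(self, chunk_id):
                if chunk_id not in self.store:
                    if self.parent is None:
                        raise KeyError(f"{chunk_id} is not held anywhere")
                    # Replicate on demand from the tier above and keep a local copy.
                    self.store[chunk_id] = self.parent.fetch(chunk_id)
                    print(f"{self.name} replicated {chunk_id} from {self.parent.name}")
                return self.store[chunk_id]

        tier1 = Tier("tier 1", seed={"run42/chunk7": b"raw detector data"})
        tier2 = Tier("tier 2", parent=tier1)
        tier3 = Tier("campus tier 3", parent=tier2)

        # A detailed local analysis triggers replication down the chain.
        tier3.fetch("run42/chunk7")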