#HighAvailability


technology-plat-form

Germany Server

Our Germany server plans are optimized for high-traffic projects and mission-critical performance.
🔗 https://www.prahost.com/dedicated-servers/germany-servers/

fptcloud1

What is High Availability? A solution to keep your system always available

High Availability (HA) is an architecture that keeps an IT system running continuously, minimizing downtime. FPT Cloud explains the role of HA in building resilient infrastructure, especially for applications that demand high stability and reliability.

Read more: What is High Availability?

dclessonsonline

Explore vPC consistency checks and failover strategies to maintain seamless connectivity in high-availability networks. https://www.dclessons.com/vpc-consistency-check-and-failover-scenarios

pythonjobsupport

Learn in 30: Stratus - High-Availability & Fault-Tolerant Computer to Run Industrial Software

In this 30-minute webinar, you will learn how Stratus servers:

  • Monitor their own health.
  • Alarm directly to the HMI.

dclessonsonline

Understand the concepts of Hot Standby Router Protocol (HSRP) with our in-depth guide. Learn how to configure HSRP to ensure high availability and reliability in your network. https://www.dclessons.com/hsrp-concepts
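For flavor, here is a minimal HSRP stanza of the kind such a guide walks through. The VLAN, group number and addresses below are invented for illustration, not taken from the linked article:

```
! On the router that should normally forward traffic:
interface Vlan10
 ip address 192.168.10.2 255.255.255.0
 standby 10 ip 192.168.10.1
 standby 10 priority 110
 standby 10 preempt
```

A peer router configured with the same `standby 10 ip 192.168.10.1` but a lower priority takes over the shared virtual gateway address if this one fails, and `preempt` lets the primary reclaim the active role after it recovers.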

likitakans

Building Scalable Web Applications: Tips and Techniques

In the ever-evolving world of web development, scalability has become a crucial factor for the success of online businesses. A scalable web application can handle increased loads without compromising performance, ensuring a smooth user experience and maintaining the integrity of the service. Whether you’re developing a new application or looking to enhance an existing one, implementing the right strategies from the outset can make all the difference. Here are some tips and techniques to help you build scalable web applications.

1. Design with Scalability in Mind

The foundation of a scalable web application starts with its architecture. Design your application with the assumption that it will grow. This means considering how each component can scale independently and how new components can be added seamlessly. Use a modular approach to make it easier to scale specific parts of your application without affecting others.

2. Choose the Right Technology Stack

Selecting the appropriate technology stack is critical for scalability. Look for technologies that are known for their performance and scalability. For example, Node.js is a popular choice for building scalable network applications due to its non-blocking I/O model. Similarly, containerization technologies like Docker and orchestration tools like Kubernetes can greatly simplify the process of scaling microservices.

3. Embrace Microservices

Microservices architecture allows you to break down your application into smaller, independent services that can be scaled individually. This approach offers several advantages, including the ability to make changes to one service without affecting others and deploying services independently. It also makes it easier to use the best technology for each service, tailored to its specific needs.

4. Optimize Database Performance

Databases are often the bottleneck in web applications. To ensure scalability, optimize your database queries, use caching mechanisms, and consider sharding or replication. NoSQL databases like MongoDB or Cassandra can be more suitable for high-scalability needs compared to traditional relational databases. Always analyze your data access patterns to choose the right database technology.
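The routing side of sharding can be sketched with a stable hash. The key format and shard count below are made up for illustration, not tied to any particular database:

```python
import hashlib

def shard_for(key: str, num_shards: int = 4) -> int:
    """Map a key to a shard deterministically with a stable hash.

    Using md5 (rather than Python's built-in hash()) keeps the mapping
    stable across processes and restarts.
    """
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards

# The same key always routes to the same shard, so reads find
# the rows that writes placed there.
assert shard_for("user-42") == shard_for("user-42")
```

Note that changing `num_shards` remaps almost every key, which is one reason production systems often reach for consistent hashing instead of a plain modulo.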

5. Implement Caching

Caching is a powerful technique to reduce the load on your servers and improve response times. By storing frequently accessed data in a fast in-memory cache like Redis or Memcached, you can serve content more quickly and reduce the number of trips to the database. Be strategic about what data you cache and for how long, to ensure data consistency and freshness.
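As a rough sketch of the idea — an in-process stand-in, not the actual Redis or Memcached API — a TTL cache is just a map of values with expiry times:

```python
import time

class TTLCache:
    """A tiny in-process stand-in for a cache like Redis or Memcached."""

    def __init__(self):
        self._store = {}

    def set(self, key, value, ttl_seconds=60):
        # Remember the value together with its expiry deadline.
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # expired: evict and report a miss
            return None
        return value

cache = TTLCache()
cache.set("user:1", {"name": "Ada"}, ttl_seconds=30)
cache.get("user:1")  # cache hit until the TTL lapses
```

The TTL is the "for how long" decision the paragraph mentions: too long and readers see stale data, too short and the database absorbs the misses.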

6. Use Content Delivery Networks (CDNs)

For applications that serve a global audience, latency can be a significant issue. CDNs can help by storing static content (like images, CSS, and JavaScript files) on servers located around the world. This ensures that users can access these resources from a server that is geographically closer to them, reducing load times and improving performance.

7. Monitor and Analyze Performance

Building a scalable web application is an ongoing process. Continuously monitor your application’s performance and analyze user behavior to identify bottlenecks and areas for improvement. Tools like Google Analytics, New Relic, and Application Performance Monitoring (APM) solutions can provide valuable insights into how your application is performing and where it can be optimized.

8. Plan for Horizontal and Vertical Scaling

There are two primary methods of scaling: horizontal (scaling out) and vertical (scaling up). Horizontal scaling involves adding more machines to distribute the load, while vertical scaling means upgrading the existing hardware. Both methods have their pros and cons, and the best approach often depends on your specific needs and budget.

9. Automate Deployment and Scaling

Automation is key to managing scalable web applications efficiently. Use automated deployment tools like Jenkins or GitHub Actions to streamline the deployment process. For scaling, leverage cloud services that offer auto-scaling features, which can automatically adjust the number of servers based on demand.
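The scale-out decision itself is often just a proportional rule. The sketch below is modeled on the formula Kubernetes documents for its Horizontal Pod Autoscaler (desired = ceil(current × observed / target)); the parameter names and thresholds are illustrative:

```python
import math

def desired_replicas(current: int, cpu_utilization: float,
                     target: float = 0.5,
                     min_replicas: int = 2, max_replicas: int = 20) -> int:
    """Proportional autoscaling rule: scale the replica count by how far
    observed utilization is from the target, then clamp to sane bounds."""
    desired = math.ceil(current * cpu_utilization / target)
    return max(min_replicas, min(max_replicas, desired))

desired_replicas(4, 0.75)  # load above target -> scale out
desired_replicas(4, 0.2)   # load below target -> clamped to the minimum
```

The clamps matter as much as the formula: the floor preserves redundancy during quiet periods, and the ceiling caps runaway cost during traffic spikes.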

10. Keep Learning and Stay Updated

The field of web development is constantly evolving, with new technologies and best practices emerging regularly. Stay informed about the latest trends in scalability and be ready to adapt your strategies as needed.

Conclusion

Building scalable web applications is a complex task that requires careful planning and execution. By following these tips and techniques, you can create applications that are robust, efficient, and capable of handling growth. Remember, scalability is not just about technology; it’s also about the processes and practices that ensure your application can evolve and thrive in a rapidly changing digital landscape.

louis-cadier-blog

Exploding 6 Myths About High Availability

1. Identical hardware is needed for high availability

Not anymore. Gone are the days of IBM’s “Parallel Sysplex” with fault tolerance built into system firmware, and even though hyperconverged solutions still require identical boxes to deliver high availability (unless complemented by a software appliance), today, wherever you have a virtual or physical Windows/Linux server with storage and network access, you can deliver high availability with the same RPOs and RTOs in milliseconds to and from dissimilar hardware.

2. Block-level, image based ‘availability’ will do

No, and it never did. Using a block-level, image-based backup tool to snapshot VMs every 5 minutes consumes considerable system resources, is costly to scale and still only provides RPOs and RTOs in minutes. Real-time, byte-level replication delivers more effective LAN/WAN optimization with superior system performance, so that failover can be automatic and genuinely instant.

3. High availability is only needed at enterprise organisations

Not these days. The larger your company, the greater the absolute cost of downtime, but smaller organisations are not exempt: minutes of downtime add up, and in a competitive landscape they can make the difference between winning and losing business, especially in the mid-market, where IT services are delivered across locations and the cost of downtime is significant enough to measure in seconds.

4. Hypervisor high availability is as good as it gets

No. Besides longer RPOs and RTOs, with sync/recovery times in minutes, VMware or Hyper-V high availability is restricted to its own hypervisor and cannot detect application failure out of the box. Application-level high availability, by contrast, syncs and recovers immediately, provides cross-hypervisor compatibility, supports virtual-to-physical failover (and vice versa) and detects application failure automatically.

5. High availability is expensive

You guessed it… not anymore. Liberating high availability from identical hardware gives you flexibility in how you provision it: redundant hardware is easier to refresh, and servers and storage can be repurposed (or tiered). Virtualization drives the cost down further, with public clouds like AWS, Azure and Arcserve Cloud offering different rates for cold and hot VMs, so you only pay for high availability when you use it.

6. High Availability is a separate tool from backup

It’s a no. Arcserve UDP v6 lets users manage their file-based, image-based and application-level high availability from a single console. Users can turn features on and off according to priority and pay only for the features they use; for mission-critical services, that means activating high availability.


randomlearnings

Postgres 9.6: remote_apply

http://michael.otacoo.com/postgresql-2/postgres-9-6-feature-highlight-remote-apply/

remote_apply is a synchronization option slated for Postgres 9.6. It guarantees that, at the moment a read transaction starts on a hot-standby process, the standby already holds the data of every transaction the master had applied up to that point.
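Enabling this lives in the primary’s configuration. A minimal sketch, assuming a synchronous standby registered as `standby1` (the standby name is illustrative):

```
# postgresql.conf on the primary (PostgreSQL 9.6+)
synchronous_standby_names = 'standby1'   # standby(s) the primary waits for
synchronous_commit = remote_apply        # commit returns only once the
                                         # standby has applied the WAL
```

With `remote_apply`, a commit acknowledged to the client is already visible to read transactions on the standby, at the price of higher commit latency than the weaker `on` or `remote_write` settings.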

relevancelab

What does your metric look like?

99.9999% availability (“six nines”) allows:

  • 31.5 sec downtime/year
  • 2.59 sec downtime/month
  • 604.8 ms downtime/week

https://en.wikipedia.org/wiki/High_availability 
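The arithmetic behind these figures is simple enough to sketch (assuming a 365-day year; the function name is mine):

```python
def downtime_per_year_seconds(availability_percent: float) -> float:
    """Seconds of permitted downtime per 365-day year at a given availability."""
    seconds_per_year = 365 * 24 * 3600  # 31,536,000
    return seconds_per_year * (1 - availability_percent / 100)

downtime_per_year_seconds(99.9999)  # "six nines": about 31.5 seconds
downtime_per_year_seconds(99.9)     # "three nines": about 8.76 hours
```

Each extra nine divides the downtime budget by ten, which is why the jump from "three nines" to "six nines" is the jump from hours to seconds per year.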

xeektech

Quality of Service (QoS)

This term signifies the overall performance of the network, as it is seen by its users.
The following are some of the metrics used to quantitatively determine QoS:

  • Error rates
  • Bandwidth
  • Throughput
  • Transmission delay
  • Availability
  • Jitter

References:

- http://en.wikipedia.org/wiki/Quality_of_service

- http://www.webopedia.com/TERM/Q/QoS.html

yagitoshiro

Netflix’s Scale -- Kosei Kitahara’s Blog (surgo.jp)
alexanderpehm-blog-blog

Databaseless Oracle Service Bus Setup

In case you don’t have a DBMS that is certified for Oracle Service Bus and don’t want to risk breaking your support contract it’s possible to do a databaseless setup. However, this comes at a price. You’re not able to use the following features:

  • Reporting functionality
  • WSM policies
For a high availability setup, you also have to consider the following aspects (like most HA aspects, these are not OSB-specific but WebLogic-specific):
  • File stores for JMS destinations and the JTA logs for the transaction recovery service must be placed on shared storage so they are available to all nodes participating in a cluster configuration (typically, in this scenario, a SAN-based file store will outperform a JDBC store anyway).
  • If you want to use the automatic server/service migration feature, you must use consensus leasing as the migration basis (meaning the cluster master maintains the leases for migratable singleton services in memory).
  • If you want to persist HTTP sessions, you’ll also have to use a file store on shared storage.

jayjanssen

A Glance at MySQL MHA

Yoshinori Matsunobu sparked some interest recently when he posted about MHA, his MySQL HA solution, and there has been some internal discussion at Yahoo comparing it with the standard we currently use.

Full disclosure:  I haven’t read every bit of the documentation or tried it out yet, so I apologize in advance to Yoshinori if I mistakenly represent his hard work. 

I see a lot of great ideas in Yoshinori’s release. It seems to focus on two main problems:

  1. A process to monitor an active master and perform a failover when it fails
  2. The bit that finds the most “caught-up” slave and distributes relay log contents from that slave to all the other slaves in the cluster

There tend to be many more pieces to performing a failover, but I’d agree that getting the slaves consistent and as up to date as possible is a great problem to solve. It is a real hole in our current internal standard, and I want to look into using Yoshinori’s code, or at least studying his algorithms.
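The "most caught-up slave" selection boils down to comparing replication positions. A toy sketch — the tuple format and host names are invented, and MHA itself reads relay log / GTID positions from each live host:

```python
def most_caught_up(slaves: dict) -> str:
    """Pick the replica whose applied position is furthest ahead.

    Positions are (log_file_index, byte_offset) tuples, so Python's
    tuple comparison orders by log file first, then by offset.
    """
    return max(slaves, key=lambda name: slaves[name])

replicas = {
    "db2": (102, 4_521_990),
    "db3": (103, 120),        # newer log file wins despite the smaller offset
    "db4": (102, 9_876_543),
}
most_caught_up(replicas)  # "db3" is the best promotion candidate
```

The hard part MHA solves is what comes next: shipping the missing relay log events from that best candidate to the replicas that are behind, so the whole cluster converges before the new master takes writes.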

However, for our use there are some extra features that are crucial:

  • Multi-tiered/colo architectures, specifically dual-masters
  • Full HA for the HA solution itself.  This gets kind of meta, but the management server Yoshinori suggests needs its own redundancy.  The doc mentions running two management servers, but doesn’t elaborate on how that would actually work.  For example, do the management servers communicate?  What happens if they fall out of communication with each other?  That leads me to:
  • Being immune to split brain issues.  The doc mentions the idea of scripting a remote power off of a potentially network-isolated master, which is certainly one solution.  I don’t disagree with that, but I’m not necessarily convinced the overall solution could handle odd network situations in general.  Add multi-colo into that mix, and it gets even more complicated to think about.  

My take (so far) is that, like many other open-sourced “HA” solutions, it tends to focus on local availability and overlook geographical redundancy (BCP, as we call it).

By no means am I trying to be critical of MySQL MHA, and I’m looking forward to seeing where MHA goes from here.

jayjanssen

I don’t always enjoy working for a media company or particularly feel that tabloid-like news coverage jibes with my personal philosophy, but it sure can be fun to help keep the site up under extreme traffic conditions like, say, royal weddings and the like.

jayjanssen

Why the Amazon EC2 outage doesn't invalidate cloud computing

Regardless of how much work any provider of servers (whether real or virtual) does to prevent it, colo outages happen.  This is what happened to EC2 yesterday: its Virginia datacenter went offline.  Does this make the “cloud” unreliable?  No: good business continuity practices are the same as ever; it’s just that the cloud does not handle them for you.  (It’s a shock, I know.)

What’s the solution?  Run your website out of more than one colo.  (Are you listening, Reddit and Foursquare?)  Just having backup servers ready outside the colo won’t get your site back up in any short length of time.  Hot/Hot is ideal, though it can certainly be more costly.  But what did yesterday’s outage cost you?

So what about EC2?  Well, I don’t consider EC2 any more unreliable than any other hosting provider.  If you practice good HA and BCP, the reliability of any individual colo is mostly irrelevant.

However, is it conceivable that EC2 could experience a worldwide outage?  I don’t see why not, though it is probably much less likely than an individual colo outage.  Your best bet is to run your website in multiple datacenters, in multiple geographical regions, from multiple providers.  For instance, Hot/Hot in EC2 and Rackspace Cloud would be pretty ideal.