Random things I’ve found while doing leetcode. This serves as a list of things I should either try to understand, or finally did end up understanding. I’ll try to put descriptions of those things here.

  1. From this comment on a leetcode of the day
    1. KMP algorithm for finding occurrences of a pattern in another string (built around a prefix function)
    2. Z-algorithm, another pattern search algo
    3. Rabin-Karp algorithm, another string searching algo related to rolling hashes, which I swear I learned about at some point in college but now I don’t remember…
  2. From LC 2429:
    1. To unset the rightmost set bit, use the following: x & (x - 1)
    2. To set the rightmost unset bit, use the following: x | (x + 1)
    3. This stuff feels like magic, but it makes a lot of sense when you work through examples. To get an intuition for it, just write down a couple of numbers in binary and try it out (or see the quick checks below).
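A quick sanity check of both identities. This is a minimal sketch; the helper names are my own:

def clear_lowest_set_bit(x):
	return x & (x - 1)

def set_lowest_unset_bit(x):
	return x | (x + 1)

# 0b10110 -> 0b10100 (the lowest set bit, bit 1, is cleared)
assert clear_lowest_set_bit(0b10110) == 0b10100
# 0b10011 -> 0b10111 (the lowest unset bit, bit 2, is set)
assert set_lowest_unset_bit(0b10011) == 0b10111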
  3. From LC 1980, Cantor’s Diagonal Argument:
    1. Start with ans = 0.
    2. For each number nums[i], we make the ith bit of ans the OPPOSITE of the ith bit of nums[i].
    3. Since len(nums) == the length of each num, ans differs from every nums[i] in at least the ith bit, so it is guaranteed not to exist in the list of nums (see the sketch below).
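A minimal sketch of the diagonal construction, assuming the input is a list of equal-length binary strings as in LC 1980 (the function name is my own):

def find_different_binary_string(nums):
	# Flip the ith character of the ith string; the result differs from
	# every nums[i] in position i, so it can't already be in the list.
	return "".join("1" if s[i] == "0" else "0" for i, s in enumerate(nums))

# find_different_binary_string(["01", "10"]) -> "11" (not in the input)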
  4. Kinda a general concept but,
    1. Always think about reversing the input array / data structure in some way. There are some greedy problems I’ve come across where I was stuck on how to solve something efficiently, but the answer became obvious once things were reversed.
  5. Difference Arrays are kinda cool, from LC 3356:
    1. If you have a bunch of range increments / decrements to apply, store just the deltas at the range boundaries in an array, then recover the final values with a prefix sum:
Let's say you have some ranges, and every index covered by each range needs to be incremented by 1.

ranges = [0, 3],[1, 2]

our array end state should be:
[1, 2, 2, 1]

We can track all of this with a single difference array (with one extra slot at the end):
step 1, process [0, 3]:
[0, 0, 0, 0, 0] -> [1, 0, 0, 0, -1]

step 2, process [1, 2]:
[1, 0, 0, 0, -1] -> [1, 1, 0, -1, -1]

step 3: take the prefix sum:

[1, 2, 2, 1, 0]

Dropping the extra trailing slot gives the desired end state, [1, 2, 2, 1].
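The same idea as a small Python sketch (the names are my own):

def apply_range_increments(length, ranges):
	diff = [0] * (length + 1)  # one extra slot so end + 1 never overflows
	for start, end in ranges:
		diff[start] += 1    # everything from start onward goes up by 1...
		diff[end + 1] -= 1  # ...and everything after end comes back down
	out, running = [], 0
	for d in diff[:length]:  # prefix sum turns the deltas into final values
		running += d
		out.append(running)
	return out

# apply_range_increments(4, [(0, 3), (1, 2)]) -> [1, 2, 2, 1]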

  6. Boyer-Moore majority voting algo from LC 2780:
    1. The concept is that when you need to find the majority item of an array (the one appearing more than half the time), you can use the fact that its count is larger than the counts of all the other elements combined. Kinda reminds me of the pigeonhole principle.
    2. example: [1,2,3,1,1] has a majority value of 1 because more than half the elements are of this value. The other elements can never have a count value greater than this.
    3. Pretty elegant — we stockpile the current guess of the majority until there’s no more left in the pile. Then, the next element takes the “majority” place until it’s “voted” out as well.
count = 0
majority = None
for n in nums:
	if count == 0:
		# adopt a new majority candidate
		majority = n
		count = 1
	elif n == majority:
		# same value as our candidate, the stockpile grows.
		count += 1
	else:
		# a different value eats into the current stockpile.
		count -= 1
# majority now holds the majority element (assuming one exists)
  7. Fast Exponentiation from LC 1922:
    1. If you need to compute a ^ b efficiently, you want to minimize the number of multiplications you do.
    2. You leverage the power rule: (a ^ b) ^ c = a ^ (b*c)
    3. So let’s say you need to compute 3 ^ 20. You can rewrite this as 9 ^ 10, which can be rewritten as 81 ^ 5, and so on. (When the exponent is odd, peel off one factor first: 81 ^ 5 = 81 * 81 ^ 4.)
    4. In this way, you halve the exponent at each step, so the whole thing takes O(log b) multiplications instead of b - 1 (see the sketch below).
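A minimal sketch of the squaring idea (iterative; names are my own). In LC-style problems you would typically take each multiplication modulo 10^9 + 7, which Python’s built-in pow(base, exp, mod) also handles:

def fast_pow(base, exp):
	result = 1
	while exp > 0:
		if exp & 1:
			# Odd exponent: peel off one factor so the rest is even.
			result *= base
		# Square the base and halve the exponent.
		base *= base
		exp >>= 1
	return result

# fast_pow(3, 20) == 3 ** 20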
  8. Sieve of Eratosthenes:
    1. If you need to compute a bunch of prime numbers up to a limit you can use this method.
    2. Iterate from 2 to limit. If curr_num has not been marked yet, it’s a prime. Then mark every multiple of curr_num up to limit as “non-prime” (see the sketch below).
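A short sketch of the sieve (names are my own):

def primes_up_to(limit):
	if limit < 2:
		return []
	is_prime = [True] * (limit + 1)
	is_prime[0] = is_prime[1] = False
	for curr in range(2, int(limit ** 0.5) + 1):
		if is_prime[curr]:
			# Mark every multiple of curr (starting at curr * curr) as non-prime.
			for multiple in range(curr * curr, limit + 1, curr):
				is_prime[multiple] = False
	return [n for n, p in enumerate(is_prime) if p]

# primes_up_to(20) -> [2, 3, 5, 7, 11, 13, 17, 19]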
  9. General Sliding Window vs DP thought process from LC 2799:
    1. Sometimes it feels like a question can be solved by a sliding window, but you might miss certain cases. In LC 2799, a complete subarray is one whose number of distinct elements equals that of the original list. Take the provided example: [1,3,1,2,2]. A subarray that a generic sliding window approach may miss is [3, 1, 2], since your right pointer may already be past index 3 by the time you slide your left pointer in to index 1. Instead, consider how many results you can bank before you slide the left pointer in: once the window [l, r] is complete, every extension of it to the right is also complete, so you can count len(nums) - r results at once. This leads to a solution where you may not even need DP (see the sketch below).
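A sketch of that counting idea as a sliding window (names are my own):

def count_complete_subarrays(nums):
	target = len(set(nums))  # number of distinct values in the whole array
	counts = {}
	res = 0
	l = 0
	for r, x in enumerate(nums):
		counts[x] = counts.get(x, 0) + 1
		# Once the window [l, r] is complete, every extension of it to the right
		# is also complete, so bank len(nums) - r results, then shrink from the left.
		while len(counts) == target:
			res += len(nums) - r
			counts[nums[l]] -= 1
			if counts[nums[l]] == 0:
				del counts[nums[l]]
			l += 1
	return res

# count_complete_subarrays([1, 3, 1, 2, 2]) -> 4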