+ The glossary term you are looking for doesn't exist or has been
+ moved.
+
+
+ Our glossary covers key algorithmic concepts like Big O notation,
+ time complexity, space complexity, and more.
+
+
+ {term.codeExample}
+
+
+ {relatedTerm.shortDefinition}
+
+
+ ))}
+
+ Want to see these concepts in action?
+
+No terms found matching your search.
+ ) : (
+
+ {filteredTerms.length}{" "}
+ {filteredTerms.length === 1 ? "term" : "terms"} found
+
+ )}
+
+ {term.shortDefinition}
+
+An algorithm is a finite sequence of well-defined, computer-implementable instructions, typically used to solve a class of problems or to perform a computation.
+
+Algorithms are unambiguous specifications for performing calculations, data processing, automated reasoning, and other tasks. They form the foundation of everything we do in computer science and programming.
+
+A good algorithm generally has the following characteristics:
+Algorithms can be expressed in many ways, including natural language, pseudocode, flowcharts, and programming languages. The efficiency of algorithms is typically measured in terms of their time complexity (how long they take to run) and space complexity (how much memory they require).
+    `,
+    examples: [
+      "Sorting algorithms arrange items in a specific order (e.g., Bubble Sort, Quick Sort)",
+      "Search algorithms locate items within a data structure (e.g., Binary Search)",
+      "Graph algorithms find paths, connectivity, or properties of graphs (e.g., Dijkstra's algorithm)",
+      "String matching algorithms find patterns in text (e.g., Boyer-Moore algorithm)",
+    ],
+    relatedTerms: [
+      "big-o-notation",
+      "time-complexity",
+      "space-complexity",
+      "pseudocode",
+    ],
+    keywords: ["computation", "procedure", "process", "method", "technique"],
+  },
+  {
+    slug: "big-o-notation",
+    term: "Big O Notation",
+    category: "analysis",
+    shortDefinition:
+      "A mathematical notation that describes the limiting behavior of a function when the argument tends towards infinity.",
+    fullDefinition: `
+Big O Notation is a mathematical notation used in computer science to describe the performance or complexity of an algorithm. Specifically, it describes the worst-case scenario and can be used to describe the execution time or space requirements of an algorithm.
+
+The "O" in Big O notation stands for "Order of," which indicates the rate of growth of an algorithm. This notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation.
+
+When analyzing algorithms, we're primarily concerned with how they scale - that is, how their performance changes as the input size grows. Big O notation allows us to express this relationship mathematically, ignoring constants and lower-order terms that become insignificant with large inputs.
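To make the scaling idea concrete, here is a small sketch (illustrative only, not part of the diff above; the function names are invented for this example). It counts the operations performed by a linear routine and a quadratic routine:

```typescript
// sumAllOps models a routine that touches each element once: n operations, O(n).
// allPairsOps models a routine that touches every ordered pair: n * n operations, O(n^2).
function sumAllOps(n: number): number {
  let ops = 0;
  for (let i = 0; i < n; i++) ops++; // one operation per element
  return ops;
}

function allPairsOps(n: number): number {
  let ops = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) ops++; // one operation per pair
  }
  return ops;
}

// Doubling n doubles sumAllOps but quadruples allPairsOps:
// sumAllOps(1000) = 1000, sumAllOps(2000) = 2000
// allPairsOps(1000) = 1000000, allPairsOps(2000) = 4000000
```

That growth-rate difference, independent of constant factors, is exactly what Big O notation captures.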
+
+Time complexity is a computational concept that measures the amount of time an algorithm takes to complete as a function of the length of the input. It provides a way to express how the runtime of an algorithm grows as the size of the input increases.
+
+When analyzing time complexity, we're usually concerned with the worst-case scenario—the maximum amount of time an algorithm could take given any valid input of size n. However, we sometimes also consider:
+
+- Best-case complexity: the minimum time required for any input of size n
+- Average-case complexity: the expected time over all inputs of size n
+
+Time complexity is commonly expressed using Big O notation, which gives us an asymptotic upper bound on the growth rate of the algorithm's runtime. This means we focus on how the algorithm scales with large inputs, rather than the exact number of operations for a specific input size.
+
+When calculating time complexity, we generally follow these principles:
+Space complexity is a measure of the amount of memory or storage space that an algorithm requires as a function of the input size. It quantifies how much additional memory the algorithm needs to complete its execution beyond the space needed to store the input.
+
+When analyzing space complexity, we consider:
+
+Like time complexity, space complexity is typically expressed using Big O notation, representing the worst-case space usage. This helps us understand how an algorithm's memory requirements scale with larger inputs.
+
+Space complexity accounts for several kinds of memory usage:
+An important concept related to space complexity is the distinction between "in-place" algorithms (which use O(1) auxiliary space) and algorithms that require significant additional space proportional to the input size.
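As a sketch of how the same task can have different space complexity (illustrative only, not part of the diff; function names are invented here), compare a recursive and an iterative sum of 1..n. The recursive version holds one call-stack frame per value of n, so its auxiliary space is O(n); the iterative version uses a fixed number of variables, O(1):

```typescript
// Recursive sum: O(n) auxiliary space (one stack frame per recursive call).
function sumRecursive(n: number): number {
  if (n === 0) return 0;          // base case
  return n + sumRecursive(n - 1); // frame stays alive until the inner call returns
}

// Iterative sum: O(1) auxiliary space (two variables, regardless of n).
function sumIterative(n: number): number {
  let total = 0;
  for (let i = 1; i <= n; i++) total += i;
  return total;
}
```

Both compute the same value; only the memory profile differs.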
+    `,
+    examples: [
+      "O(1) - Constant space: Algorithms that use a fixed amount of extra space regardless of input size (like iterative implementations of many algorithms)",
+      "O(log n) - Logarithmic space: Often seen in recursive implementations of divide and conquer algorithms (the call stack uses logarithmic space)",
+      "O(n) - Linear space: Algorithms that use extra space directly proportional to input size (like creating a new array of the same size)",
+      "O(n²) - Quadratic space: Algorithms that create data structures with size proportional to n² (like adjacency matrices for graphs)",
+    ],
+    relatedTerms: ["big-o-notation", "time-complexity", "in-place-algorithm"],
+    keywords: [
+      "memory usage",
+      "storage requirements",
+      "auxiliary space",
+      "in-place",
+      "memory complexity",
+    ],
+  },
+  {
+    slug: "sorting-algorithm",
+    term: "Sorting Algorithm",
+    category: "algorithms",
+    shortDefinition:
+      "Algorithms that arrange elements in a specific order, typically ascending or descending.",
+    fullDefinition: `
+Sorting algorithms are procedures that arrange elements in a specific order, typically in ascending or descending order based on a comparison operator. They are fundamental algorithms studied in computer science and are essential in many applications.
+
+Sorting algorithms can be categorized in several ways:
+
+The efficiency of sorting algorithms is typically measured by their time complexity (how fast they run) and space complexity (how much memory they use). The best theoretical time complexity for comparison-based sorting is O(n log n), though specialized algorithms can perform better in specific scenarios.
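As a minimal concrete example of a comparison-based sort (a sketch alongside the diff, not taken from it), here is insertion sort — in-place, stable, O(n²) comparisons in the worst case, O(1) auxiliary space:

```typescript
// Insertion sort: grows a sorted prefix one element at a time.
function insertionSort(arr: number[]): number[] {
  for (let i = 1; i < arr.length; i++) {
    const key = arr[i];
    let j = i - 1;
    // Shift larger elements one slot right to make room for key.
    while (j >= 0 && arr[j] > key) {
      arr[j + 1] = arr[j];
      j--;
    }
    arr[j + 1] = key;
  }
  return arr;
}
```

Despite the O(n²) worst case, its simplicity and O(1) extra space make it a common choice for small or nearly-sorted inputs.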
+
+Searching algorithms are methods designed to find an item or items with specified properties within a collection of data. These algorithms are fundamental in computer science and have numerous applications, from finding records in databases to locating specific elements in arrays.
+
+Search algorithms can be broadly classified into two categories:
+
+- Sequential search: examines elements one at a time until the target is found (e.g., linear search)
+- Interval search: repeatedly narrows a sorted search space (e.g., binary search)
+
+The efficiency of searching algorithms is typically measured by their time complexity, which indicates how the search time increases with the size of the data. Some searches can be optimized by using specialized data structures like hash tables, which provide constant-time access on average.
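The two categories above can be sketched side by side (illustrative code, not part of the diff): linear search checks every element, O(n), while binary search requires a sorted array and halves the range each step, O(log n):

```typescript
// Sequential search: O(n), works on any array.
function linearSearch(arr: number[], target: number): number {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) return i;
  }
  return -1;
}

// Interval search: O(log n), requires the array to be sorted.
function binarySearch(sorted: number[], target: number): number {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) lo = mid + 1; // discard the left half
    else hi = mid - 1;                      // discard the right half
  }
  return -1;
}
```

Both return the index of the target, or -1 when it is absent.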
+
+An in-place algorithm is an algorithm that transforms the input data structure without using extra data structures for storage. It operates directly on the input, modifying it as necessary, while using only a constant amount of extra space for variables.
+
+The key characteristic of in-place algorithms is their space efficiency. They typically have a space complexity of O(1) or constant space, meaning the amount of additional memory they use doesn't grow with the size of the input.
+
+In-place algorithms are particularly valuable in environments with limited memory resources, when working with very large datasets, or when memory allocation and deallocation operations are expensive.
+
+Some algorithms can be implemented in either in-place or out-of-place variants, with different trade-offs in terms of simplicity, speed, and memory usage. In-place versions generally save space but might be more complex or slightly slower in some cases.
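Array reversal illustrates the in-place vs. out-of-place trade-off (a sketch, not part of the diff; both variants are standard): the in-place version swaps within the input using O(1) auxiliary space and mutates it, while the out-of-place version uses O(n) extra space but leaves the input untouched:

```typescript
// In-place reversal: O(1) auxiliary space, mutates the input array.
function reverseInPlace(arr: number[]): number[] {
  let left = 0;
  let right = arr.length - 1;
  while (left < right) {
    [arr[left], arr[right]] = [arr[right], arr[left]]; // swap ends
    left++;
    right--;
  }
  return arr;
}

// Out-of-place reversal: O(n) auxiliary space, input is not modified.
function reversedCopy(arr: number[]): number[] {
  const out: number[] = [];
  for (let i = arr.length - 1; i >= 0; i--) out.push(arr[i]);
  return out;
}
```

Choosing between them is exactly the simplicity/speed/memory trade-off described above.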
+
+Divide and Conquer is a problem-solving paradigm that breaks a problem into smaller, similar subproblems, solves each subproblem independently, and then combines these solutions to create a solution to the original problem.
+
+This algorithmic approach follows three main steps:
+
+1. Divide: break the problem into smaller subproblems of the same type
+2. Conquer: solve each subproblem, typically recursively
+3. Combine: merge the subproblem solutions into a solution to the original problem
+
+Divide and conquer algorithms are often implemented using recursion, though iterative implementations are also possible. They are particularly useful for problems that can be broken down into independent, similar subproblems.
+
+This approach has several advantages:
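The divide / conquer / combine steps can be sketched with merge sort (illustrative code, not part of the diff), which runs in O(n log n) time:

```typescript
// Merge sort: divide the array in half, recursively sort each half (conquer),
// then merge the two sorted halves (combine).
function mergeSort(arr: number[]): number[] {
  if (arr.length <= 1) return arr; // base case: already sorted
  const mid = Math.floor(arr.length / 2);
  const left = mergeSort(arr.slice(0, mid)); // divide + conquer
  const right = mergeSort(arr.slice(mid));

  // Combine: merge two sorted arrays into one.
  const merged: number[] = [];
  let i = 0;
  let j = 0;
  while (i < left.length && j < right.length) {
    merged.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return merged.concat(left.slice(i), right.slice(j));
}
```

Each level of recursion does O(n) merging work across O(log n) levels, which is where the O(n log n) bound comes from.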
+Asymptotic analysis is a method for describing the efficiency of algorithms by analyzing their performance as the input size grows towards infinity. Rather than focusing on the exact number of operations, it characterizes algorithm performance in terms of how the resource usage (time or space) scales with input size.
+
+Key principles of asymptotic analysis include:
+
+Asymptotic analysis typically uses three main notations:
+
+- Big O (O): an asymptotic upper bound on the growth rate
+- Big Omega (Ω): an asymptotic lower bound on the growth rate
+- Big Theta (Θ): a tight bound, both upper and lower
+
+This approach allows us to compare algorithms independently of implementation details, hardware, or specific inputs, focusing instead on their fundamental efficiency characteristics.
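A quick numeric sketch (illustrative only, not part of the diff) shows why constants and lower-order terms are dropped: for f(n) = n² + 3n + 1, the ratio f(n) / n² approaches 1 as n grows, so f is O(n²):

```typescript
// f counts the operations of a hypothetical algorithm: n^2 + 3n + 1.
function f(n: number): number {
  return n * n + 3 * n + 1;
}

// Ratio of f(n) to its dominant term n^2; tends to 1 for large n.
function ratioToNSquared(n: number): number {
  return f(n) / (n * n);
}

// ratioToNSquared(10)   = 1.31
// ratioToNSquared(1000) = 1.003001
```

The lower-order terms still exist, but their relative contribution vanishes — which is the whole point of asymptotic notation.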
+    `,
+    examples: [
+      "An algorithm that performs n² + 3n + 1 operations has an asymptotic complexity of O(n²)",
+      "Binary search has a time complexity of O(log n) because the number of operations is proportional to the logarithm of the input size",
+      "The asymptotic space complexity of merge sort is O(n) because it requires additional storage proportional to the input size",
+    ],
+    relatedTerms: [
+      "big-o-notation",
+      "time-complexity",
+      "space-complexity",
+      "algorithm-analysis",
+    ],
+    keywords: [
+      "growth rate",
+      "computational complexity",
+      "algorithm analysis",
+      "efficiency",
+    ],
+  },
+  {
+    slug: "recursion",
+    term: "Recursion",
+    category: "techniques",
+    shortDefinition:
+      "A programming technique where a function calls itself to solve smaller instances of the same problem.",
+    fullDefinition: `
+Recursion is a programming technique in which a function calls itself directly or indirectly to solve a problem. Each recursive call addresses a smaller instance of the same problem, moving toward a base case that can be solved without further recursion.
+
+A recursive algorithm typically consists of two essential parts:
+
+- Base case: a condition under which the function returns a result directly, without further recursion
+- Recursive case: a step that reduces the problem and calls the function on a smaller instance
+
+Recursion naturally maps to problems that can be broken down into smaller, similar subproblems, particularly those with a hierarchical or nested structure. It often leads to elegant and concise solutions to complex problems.
+
+While recursion can be powerful and intuitive, it has some limitations:
+Techniques like tail recursion, memoization, and dynamic programming can help address these limitations in many cases.
+    `,
+    codeExample: `// Recursive factorial function
+function factorial(n) {
+  // Base case
+  if (n === 0 || n === 1) {
+    return 1;
+  }
+  // Recursive case
+  return n * factorial(n - 1);
+}
+
+// Recursive fibonacci function
+function fibonacci(n) {
+  // Base cases
+  if (n <= 0) return 0;
+  if (n === 1) return 1;
+
+  // Recursive case
+  return fibonacci(n - 1) + fibonacci(n - 2);
+}`,
+    examples: [
+      "Factorial calculation: factorial(n) = n * factorial(n-1), with factorial(0) = 1",
+      "Fibonacci sequence: fibonacci(n) = fibonacci(n-1) + fibonacci(n-2), with fibonacci(0) = 0 and fibonacci(1) = 1",
+      "Binary tree traversal: Process the current node, then recursively process left and right subtrees",
+      "Merge sort: Recursively sort the two halves of an array, then merge them",
+    ],
+    relatedTerms: ["divide-and-conquer", "dynamic-programming", "algorithm"],
+    keywords: [
+      "self-reference",
+      "recursive functions",
+      "call stack",
+      "base case",
+    ],
+  },
+  {
+    slug: "dynamic-programming",
+    term: "Dynamic Programming",
+    category: "techniques",
+    shortDefinition:
+      "A method for solving complex problems by breaking them down into simpler subproblems and storing their solutions to avoid redundant computations.",
+    fullDefinition: `
+Dynamic Programming (DP) is a technique for solving complex problems by breaking them down into simpler overlapping subproblems and storing the solutions to these subproblems to avoid redundant calculation. It's particularly useful for optimization problems where the goal is to find the best solution among many possible options.
+
+Two key properties typically characterize problems suited for dynamic programming:
+
+- Overlapping subproblems: a naive recursive solution solves the same subproblems repeatedly
+- Optimal substructure: an optimal solution can be assembled from optimal solutions to its subproblems
+
+Dynamic programming can be implemented using two main approaches:
+
+- Top-down (memoization): solve recursively, caching each subproblem's result the first time it is computed
+- Bottom-up (tabulation): iteratively fill a table of subproblem results, starting from the smallest cases
+
+The primary advantage of dynamic programming is that it can dramatically improve the efficiency of algorithms for problems with overlapping subproblems, often reducing exponential time complexity to polynomial time.
+    `,
+    codeExample: `// Fibonacci using dynamic programming (memoization)
+function fibonacciDP(n, memo = {}) {
+  // Check if we've already computed this value
+  if (n in memo) return memo[n];
+
+  // Base cases
+  if (n <= 0) return 0;
+  if (n === 1) return 1;
+
+  // Store and return the result
+  memo[n] = fibonacciDP(n - 1, memo) + fibonacciDP(n - 2, memo);
+  return memo[n];
+}
+
+// Fibonacci using dynamic programming (tabulation)
+function fibonacciTabulation(n) {
+  if (n <= 0) return 0;
+  if (n === 1) return 1;
+
+  // Create an array to store values
+  const dp = new Array(n + 1);
+  dp[0] = 0;
+  dp[1] = 1;
+
+  // Fill the array
+  for (let i = 2; i <= n; i++) {
+    dp[i] = dp[i - 1] + dp[i - 2];
+  }
+
+  return dp[n];
+}`,
+    examples: [
+      "Fibonacci sequence calculation with memoization",
+      "Knapsack problem: Finding the most valuable combination of items that fit within a weight constraint",
+      "Longest Common Subsequence: Finding the longest sequence common to two sequences",
+      "Shortest path algorithms like Floyd-Warshall",
+    ],
+    relatedTerms: ["recursion", "memoization", "algorithm", "optimization"],
+    keywords: [
+      "subproblems",
+      "optimization",
+      "memoization",
+      "tabulation",
+      "optimal substructure",
+    ],
+  },
+];
+
+// Function to get a glossary term by its slug
+export function getGlossaryTermBySlug(slug: string): GlossaryTerm | undefined {
+  return glossaryTerms.find((term) => term.slug === slug);
+}
+
+// Function to get related terms for a specific term
+export function getRelatedTerms(slug: string): GlossaryTerm[] {
+  const term = getGlossaryTermBySlug(slug);
+  if (!term || !term.relatedTerms || term.relatedTerms.length === 0) {
+    return [];
+  }
+
+  return term.relatedTerms
+    .map((relatedSlug) => getGlossaryTermBySlug(relatedSlug))
+    .filter((term): term is GlossaryTerm => term !== undefined);
+}
+
+// Function to search for glossary terms
+export function searchGlossaryTerms(query: string): GlossaryTerm[] {
+  const lowercaseQuery = query.toLowerCase();
+  return glossaryTerms.filter(
+    (term) =>
+      term.term.toLowerCase().includes(lowercaseQuery) ||
+      term.shortDefinition.toLowerCase().includes(lowercaseQuery) ||
+      term.category.toLowerCase().includes(lowercaseQuery) ||
+      term.keywords?.some((keyword) =>
+        keyword.toLowerCase().includes(lowercaseQuery)
+      )
+  );
+}
diff --git a/lib/utils.ts b/lib/utils.ts
index fbfd39b..903ef17 100644
--- a/lib/utils.ts
+++ b/lib/utils.ts
@@ -14,3 +14,23 @@ export function getRandomValueFromArray(array: number[]): number {
   const randomIndex = Math.floor(Math.random() * array.length);
   return array[randomIndex];
 }
+
+// Glossary utility functions
+export function sortByAlphabet