## Interview Question

Software Engineer / Developer. **Country:** Canada

This is what I came up with, but it took me a few tries and some help from the interviewer.

```
public static void printPairs(List<Integer> integerList, int sum)
{
    Collections.sort(integerList);
    int start = 0;
    int end = integerList.size() - 1;
    int summation;
    while (start < end)
    {
        summation = integerList.get(start) + integerList.get(end);
        if (summation == sum)
        {
            System.out.println("[" + integerList.get(start) + ", " + integerList.get(end) + "]");
            start++;
            end--;
        }
        else if (summation < sum)
            start++;
        else
            end--;
    }
}
```

I believe that your solution is correct as long as all of the numbers are distinct. If numbers can be repeated, then you need to rework your algorithm to account for multiples of the same entry near the head or tail of your list.
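As an illustration, here is a sketch (my own names, not from the original answer) of one way the two-pointer pass could handle repeated values: count the run of equal values at each end and multiply the run lengths.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class DuplicatePairs {
    // Two-pointer scan over a sorted list that counts runs of equal values
    // at each end, so repeated entries yield the right number of pairs.
    public static int countPairs(List<Integer> list, int sum) {
        Collections.sort(list);
        int start = 0, end = list.size() - 1, pairs = 0;
        while (start < end) {
            int s = list.get(start) + list.get(end);
            if (s < sum) { start++; }
            else if (s > sum) { end--; }
            else if (list.get(start).equals(list.get(end))) {
                // All remaining values are equal: any 2 of them form a pair.
                int n = end - start + 1;
                pairs += n * (n - 1) / 2;
                break;
            } else {
                int lo = 1, hi = 1;
                while (start + 1 < end && list.get(start + 1).equals(list.get(start))) { start++; lo++; }
                while (end - 1 > start && list.get(end - 1).equals(list.get(end))) { end--; hi++; }
                pairs += lo * hi;  // each copy on the left pairs with each on the right
                start++;
                end--;
            }
        }
        return pairs;
    }

    public static void main(String[] args) {
        System.out.println(countPairs(new java.util.ArrayList<>(Arrays.asList(50, 50, 50)), 100));
        System.out.println(countPairs(new java.util.ArrayList<>(Arrays.asList(1, 99, 99, 2, 98)), 100));
    }
}
```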

Also, you might want the function to return a Collection/List containing the pairs instead of only printing them (unless the interviewer told you printing was OK).

Lastly, you should show complete mastery of your solution by discussing its run-time and space complexity. In your solution the extra space is O(n): Collections.sort() may allocate up to n/2 object references as scratch space, which is still O(n) asymptotically. The run time is O(n log n) in the worst case, dominated by the call to Collections.sort(). Note that the javadoc says nearly sorted lists can approach O(n) run time, but that is a rather brave assumption to rely on; in an interview you should reason from the worst case.

All in all, I like your answer as your function could theoretically run in O(1) space and O(n) run time if the list is distinct and sorted prior to being passed to your function. Again, explaining these subtle differences will prove that you have full mastery and understanding of what is going on under the hood.

Perfect. On an already sorted array it runs in O(n) time.

Question: Is there any way that this problem can be solved in O(n) time if the array is not sorted?

Another way is to use contains(100 - integerI) right after you've read each number, but in that case you'll have duplicate results that need to be filtered out, and it will return true even for the single-element input {50}. So I guess it's best to stick with this one, since it's less verbose and clearer:

```
public static void get100(List<Integer> integers) {
    for (int i = 0; i < integers.size() - 1; i++) {
        int integerI = integers.get(i);
        for (int j = i + 1; j < integers.size(); j++) {
            int integerJ = integers.get(j);
            if (integerI + integerJ == 100) {
                System.out.println(integerI + ", " + integerJ);
            }
        }
    }
}
```

```
import java.util.Arrays;
import java.util.List;

public class Code {
    public static void main(String[] args) {
        findCenturyPairs(Arrays.asList(1, 3, 5, 99, 98, 70, 97));
    }

    // Prints the index pairs [i,j] whose values sum to 100.
    private static void findCenturyPairs(List<Integer> al) {
        for (int i = 0; i < al.size(); i++) {
            for (int j = i + 1; j < al.size(); j++) {
                if (al.get(i) + al.get(j) == 100) System.out.print("[" + i + "," + j + "]");
            }
        }
    }
}
```

```
/*
* Write a function that takes a list of positive integers as an input, and returns all of the pairs of integers it contains that sum to 100.
* You can assume that all inputs are between 1 and 99.
*/
public List<List<Integer>> findPairs(int[] array) {
    // HashSet of values seen so far
    HashSet<Integer> seen = new HashSet<Integer>();
    List<List<Integer>> result = new ArrayList<List<Integer>>();
    for (int i : array) {
        // If the complement was seen earlier, we have a pair.
        if (seen.contains(100 - i)) {
            ArrayList<Integer> pair = new ArrayList<Integer>();
            pair.add(i);
            pair.add(100 - i);
            result.add(pair);
        }
        seen.add(i);
    }
    return result;
}
```

```
def sum_hundred(input_list):
    # Count occurrences of each value (inputs are 1..99).
    counts = [0] * 100
    for number in input_list:
        counts[number] += 1
    # Each value i pairs with 100 - i; stop below 50 to avoid double-printing.
    for i in range(1, 50):
        for _ in range(counts[i] * counts[100 - i]):
            print(i, 100 - i)
    # 50 pairs with itself: n choose 2 pairs.
    for _ in range(counts[50] * (counts[50] - 1) // 2):
        print(50, 50)

if __name__ == '__main__':
    sum_hundred([1, 3, 5, 99, 98, 70, 97])
```

```
#include <algorithm>
#include <iostream>
#include <vector>

int main(int argc, char* argv[]) {
    std::vector<int> list;
    list.push_back(1);
    list.push_back(3);
    list.push_back(5);
    list.push_back(99);
    list.push_back(98);
    list.push_back(70);
    list.push_back(97);
    std::sort(list.begin(), list.end());
    int l = 0;
    int h = list.size() - 1;
    while (l < h) {
        if (list[l] + list[h] == 100) {
            std::cout << list[l] << " " << list[h] << "\n";
            ++l;
        } else if (list[l] + list[h] > 100) {
            --h;
        } else {
            ++l;
        }
    }
}
```

The problem with the nested-loop code is that it has a complexity of O(n^2).

Therefore, we can do better. How? Use an O(n log n) sorting algorithm, then traverse the array from the beginning and from the end with two indices, checking in linear time whether the sum is 100.

Thus, we get an overall complexity of O(n log n), which is better.

One oft-overlooked family of algorithms involves simply allocating a lot of memory and directly storing things or setting Booleans. This sort of algorithm can work on fairly large data sets, since computers are quite big nowadays (though you do have some cache concerns). If your interviewer has been in the industry for a few Moore's-law cycles, he or she may not have considered such a solution for problems that were just a little too large a few years ago, and may be quite amazed. A friend of mine proposed a solution that required 2^32 Booleans and the interviewer was blown away, if one can believe him. And he did it in two different interviews for the same problem with the same result. They were probably looking for some elaborate data structure. A professor I had once mentioned a similar solution that must have been thought of by a consultant for a real-world problem back in the '60s or '70s. Consider it for other problems.

In this case, if you don't have duplicates and want to just return the values, use a vector of Booleans:

    set all elements of store to false
    for each n:
        m = 100 - n
        if store[m]: print m, n
        store[n] = true
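A minimal Java sketch of that pseudocode (my own names; assuming values in [1, 99], and returning the pairs rather than only printing them):

```java
import java.util.ArrayList;
import java.util.List;

public class BooleanStore {
    // Direct-address table: seen[v] flips to true once value v appears.
    // Each pair is reported when its second member is encountered.
    public static List<int[]> pairs(List<Integer> values) {
        boolean[] seen = new boolean[100];   // inputs are in [1, 99]
        List<int[]> result = new ArrayList<>();
        for (int n : values) {
            int m = 100 - n;
            if (seen[m]) {
                result.add(new int[]{m, n});
            }
            seen[n] = true;
        }
        return result;
    }

    public static void main(String[] args) {
        for (int[] p : pairs(java.util.Arrays.asList(1, 3, 5, 99, 98, 70, 97))) {
            System.out.println(p[0] + " " + p[1]);
        }
    }
}
```

Note that with a boolean table, a lone {50} correctly produces no pair, since seen[50] is still false when the single 50 is examined.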

If you had to return the indexes and there were duplicates, then use what is essentially a degenerate hash table with links to store the different indexes of the items. The hash table is degenerate in that it has 100 buckets and does nothing to the value to get the hash: just an array of linked lists of indexes, or for that matter a vector of vectors of indexes. You could use a real hash table if the numbers were really large and the values sparse.
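Sketched in Java (a hypothetical helper, under the 100-bucket assumption above), the vector-of-vectors-of-indexes variant might look like this:

```java
import java.util.ArrayList;
import java.util.List;

public class IndexBuckets {
    // "Degenerate hash table": bucket[v] holds every index where value v occurs.
    // Emits all index pairs (i, j), i < j, whose values sum to 100.
    public static List<int[]> indexPairs(int[] values) {
        List<List<Integer>> bucket = new ArrayList<>();
        for (int v = 0; v <= 100; v++) bucket.add(new ArrayList<>());
        List<int[]> result = new ArrayList<>();
        for (int j = 0; j < values.length; j++) {
            // Every earlier index holding the complement pairs with j.
            for (int i : bucket.get(100 - values[j])) {
                result.add(new int[]{i, j});
            }
            bucket.get(values[j]).add(j);
        }
        return result;
    }

    public static void main(String[] args) {
        for (int[] p : indexPairs(new int[]{50, 50, 50})) {
            System.out.println(p[0] + "," + p[1]);
        }
    }
}
```

This handles duplicates naturally: {50, 50, 50} yields the three index pairs (0,1), (0,2), and (1,2), though the output itself can be O(n^2) in size in the worst case.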


I would say the following is optimal.

```
#include <list>
#include <utility>
#include <vector>

typedef ::std::pair<int, int> IntPair;
typedef ::std::vector<IntPair> IntPairVector;
typedef ::std::list<int> IntList;

void pairs(const IntList & list, IntPairVector & result)
{
    result.clear();
    // mark which numbers occur, O(N)
    ::std::vector<bool> map(100, false);
    for (auto iter : list)
    {
        map[iter] = true;
    }
    // all numbers in [1, 50], so each pair is reported once; O(1)
    for (int i = 1; i <= 50; ++i)
    {
        const int j = 100 - i;
        if (map[i] && map[j]) // do we have both numbers?
        {
            result.push_back(IntPair(i, j));
        }
    }
}
```

However, does your code pass these inputs? :)

{50}

{50,50,50}

If we have to print all possible index pairs, the complexity is O(n^2) in the worst case.

I don't use any std::map; I just iterate once over the whole list.

It's no big deal to support your special cases while still staying O(N). The following code is the solution:

```
#include <list>
#include <utility>
#include <vector>

typedef ::std::pair<int, int> IntPair;
typedef ::std::vector<IntPair> IntPairVector;
typedef ::std::list<int> IntList;

void pairs(const IntList & list, IntPairVector & result)
{
    result.clear();
    // count occurrences of each number, O(N)
    ::std::vector<int> map(100, 0);
    for (auto iter : list)
    {
        ++map[iter];
    }
    // all numbers in [1, 50], so each unordered pair is counted once; O(1)
    for (int i = 1; i <= 50; ++i)
    {
        const int j = 100 - i;
        const int cnti = map[i];
        if (i != j)
        {
            const int cntj = map[j];
            result.insert(result.end(), cnti * cntj, IntPair(i, j));
        }
        else if (2 <= cnti) // i == j == 50: choose any 2 of the cnti copies
        {
            result.insert(result.end(), cnti * (cnti - 1) / 2, IntPair(i, j));
        }
    }
}
```

This is my take on the problem using C++:

- ekiz.mertcan October 22, 2014