Here is the source code:
package org.ajeet.algorithms.dp;

public class Manacher {
    private int[] p;   // p[i] = length of longest palindromic substring of t, centered at i
    private String s;  // original string
    private char[] t;  // transformed string

    public Manacher(String s) {
        this.s = s;
        t = preprocess(s);
        p = new int[t.length];
        int center = 0, right = 0;
        for (int i = 1; i < t.length - 1; i++) {
            int mirror = 2 * center - i;
            if (right > i)
                p[i] = Math.min(right - i, p[mirror]);

            // attempt to expand palindrome centered at i
            while (t[i + (1 + p[i])] == t[i - (1 + p[i])])
                p[i]++;

            // if palindrome centered at i expands past right,
            // adjust center based on expanded palindrome
            if (i + p[i] > right) {
                center = i;
                right = i + p[i];
            }
        }
    }

    // Transform s into t.
    // For example, if s = "abba", then t = "$#a#b#b#a#@".
    // The '#' separators are interleaved so that even- and odd-length
    // palindromes are handled uniformly; '$' and '@' are prepended and
    // appended as sentinels to avoid bounds checking.
    public char[] preprocess(String s) {
        char[] t = new char[s.length() * 2 + 3];
        t[0] = '$';
        t[s.length() * 2 + 2] = '@';
        for (int i = 0; i < s.length(); i++) {
            t[2 * i + 1] = '#';
            t[2 * i + 2] = s.charAt(i);
        }
        t[s.length() * 2 + 1] = '#';
        return t;
    }

    // longest palindromic substring
    public String longestPalindromicSubstring() {
        int length = 0; // length of longest palindromic substring
        int center = 0; // center of longest palindromic substring
        for (int i = 1; i < p.length - 1; i++) {
            if (p[i] > length) {
                length = p[i];
                center = i;
            }
        }
        return s.substring((center - 1 - length) / 2, (center - 1 + length) / 2);
    }

    // longest palindromic substring centered at index i/2
    public String longestPalindromicSubstring(int i) {
        i = i + 2;
        int length = p[i];
        int center = i;
        return s.substring((center - 1 - length) / 2, (center - 1 + length) / 2);
    }

    // test client
    public static void main(String[] args) {
        String s = "aba";
        Manacher manacher = new Manacher(s);
        // here is the longest palindromic substring
        System.out.println(manacher.longestPalindromicSubstring());
        // all palindromes by center
        for (int i = 0; i < 2 * s.length(); i++)
            System.out.println(i + ": " + manacher.longestPalindromicSubstring(i));
    }
}
We can achieve it by using a circular doubly linked list and a HashMap.
The HashMap is used to detect duplicate elements:
HashMap<key = url, value = reference to the linked list node> storedElements;
Algorithm:
1. add():
   Check whether the element already exists in the list (using the HashMap). If it does, remove it from its old location and re-add it at the current tail.
   If it does not exist, add it to the list at the tail and store its reference in the HashMap.
   Complexity = O(1)
2. getMostRecentNElements():
   Return all elements from tail to head, i.e. in reverse order.
   Complexity = O(N)
Space complexity = O(2N): O(N) for the list plus O(N) for the map.
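The same behaviour can be sketched with Java's access-ordered LinkedHashMap, which maintains the doubly linked list internally (the class and method names below are my own, not from the post above):

```java
import java.util.*;

public class RecentUrls {
    // Access-ordered LinkedHashMap keeps the most recently inserted/accessed
    // entry at the tail, like the hand-rolled circular list plus HashMap.
    private final LinkedHashMap<String, String> urls =
            new LinkedHashMap<>(16, 0.75f, true);

    // add(): drop the old position if the URL exists, then append at the tail. O(1).
    public void add(String url) {
        urls.remove(url);
        urls.put(url, url);
    }

    // getMostRecentNElements(): walk from tail (most recent) to head. O(N).
    public List<String> getMostRecentNElements(int n) {
        List<String> all = new ArrayList<>(urls.keySet());
        Collections.reverse(all); // tail first
        return all.subList(0, Math.min(n, all.size()));
    }
}
```

With access order enabled, a plain put() of an existing key already moves it to the tail, so the explicit remove() is only there to mirror the algorithm as described.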
public static void perm2(String s) {
    int N = s.length();
    char[] a = new char[N];
    for (int i = 0; i < N; i++)
        a[i] = s.charAt(i);
    perm2(a, N);
}

// note: n must remain the parameter; redeclaring "int n = a.length" inside
// the method would not compile and would break the recursion on prefixes
private static void perm2(char[] a, int n) {
    if (n == 1) {
        System.out.println(a);
        return;
    }
    for (int i = 0; i < n; i++) {
        swap(a, i, n - 1);
        perm2(a, n - 1);
        swap(a, i, n - 1);
    }
}

private static void swap(char[] a, int i, int j) {
    char c = a[i];
    a[i] = a[j];
    a[j] = c;
}
After modification:

public static int find(String source) {
    int window = 0;
    int start = 0;
    int i = 0;
    HashMap<Character, Integer> map = new HashMap<Character, Integer>();
    while (i < source.length()) {
        if (map.containsKey(source.charAt(i))) {
            int currentWindow = i - start;
            if (window < currentWindow) {
                window = currentWindow;
            }
            // never move start backwards: the previous index stored in the
            // map may lie before the current window (e.g. for "ABBA")
            start = Math.max(start, map.get(source.charAt(i)) + 1);
            //map.clear();
        }
        if (i + 1 == source.length()) {
            int currentWindow = i - start + 1;
            if (window < currentWindow) {
                window = currentWindow;
            }
            return window;
        }
        map.put(source.charAt(i), i);
        i++;
    }
    return 0;
}
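To sanity-check the window logic, here is a compact, self-contained equivalent with a couple of assertions (the class and method names are mine):

```java
import java.util.HashMap;

public class LongestDistinct {
    // Sliding window: 'start' only moves forward, so each index is
    // visited a constant number of times, giving O(n) overall.
    public static int find(String s) {
        HashMap<Character, Integer> last = new HashMap<>();
        int best = 0, start = 0;
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (last.containsKey(c) && last.get(c) >= start) {
                start = last.get(c) + 1; // jump just past the duplicate
            }
            last.put(c, i);
            best = Math.max(best, i - start + 1);
        }
        return best;
    }
}
```

For "AABC" this returns 3 ("ABC"), matching the visualization below.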
@mindless monk
OK, I see your confusion. I just forgot to remove this line:
"If a duplicate occurs then I just move the start position to the previous index of this character (the hash table has the previous index)".
The editing feature is not available so I can't remove it; it is from before I modified my algorithm. Just follow the Java implementation and the visual representation.
The start pointer is never used to compare; "i" is. And as you can see, we never move "i" backwards.
Just try the visual representation of this algorithm; I posted it using "AABC".
@mindless monk
I did not get your point: "Hence, can attain O(n) in true sense."
The above solution is also O(N); we are not moving back, we use the previous index only for the size of the window.
Can you explain how your solution is O(N) "in the true sense"? In each iteration you are also doing a lookup in a map.
Iteration 0:
------------
 |
"AABC"
window = 0;
start = 0;
i = 0
Iteration 1:
-------------
  |
"AABC"
window = 1; (1 - 0)
start = 1; duplicate occurs, so move it
i = 1
Iteration 2:
-------------
   |
"AABC"
window = 1;
start = 1;
i = 2
Iteration 3:
-------------
    |
"AABC"
window = 3; (i - start) + 1
start = 1;
i = 3 // we are reaching the end of the string, so check for the last window
Here is my approach:
It has time complexity O(N) and space complexity O(size of window).
I am using a hash table to track the characters in the current window.
If a duplicate occurs, I move the start position to the previous index of this character (the hash table has the previous index) and clear the hash table for the next window.
If the previous window size is smaller than (current pointer - start), replace the window size with (current pointer - start).
Here is the implementation in Java:

public static int find(String source) {
    int window = 0;
    int start = 0;
    int i = 0;
    HashMap<Character, Integer> map = new HashMap<Character, Integer>();
    while (i < source.length()) {
        if (map.containsKey(source.charAt(i))) {
            int currentWindow = i - start;
            if (window < currentWindow) {
                window = currentWindow;
            }
            start = i;
            map.clear();
        }
        if (i + 1 == source.length()) {
            int currentWindow = i - start + 1;
            if (window < currentWindow) {
                window = currentWindow;
            }
            return window;
        }
        map.put(source.charAt(i), i);
        i++;
    }
    return 0;
}
Here is a small visualisation of the above problem, for S = "coobdafceeaxab" and T = "abc":
step-0:
Initial doubly linked-list = NULL
Initial hash-table = NULL
step-1:
Head<->[c,0]<->tail
hashTable[c] = [0, 'pointer to c node in LL']
step-2:
Head<->[c,0]<->[b,3]<->tail
hashTable[c] = [0, 'pointer to c node in LL'], h[b] = [3, 'pointer to b node in LL'],
Step-3:
Head<->[c,0]<->[b,3]<->[a,5]<->tail
hashTable[c] = [0, 'pointer to c node in LL'], h[b] = [3, 'pointer to b node in LL'], h[a] = [5, 'pointer to a node in LL']
Minimum Window => difference from tail and head => (5-0)+1 => Length: 6
Step-4:
Update entry of C to index 7 here. (Remove 'c' node from linked-list and append at the tail)
Head<->[b,3]<->[a,5]<->[c,7]<->tail
hashTable[c] = [7, 'new pointer to c node in LL'], h[b] = [3, 'pointer to b node in LL'], h[a] = [5, 'pointer to a node in LL'],
Minimum Window => difference from tail and head => (7-3)+1 => Length: 5
Find the minimum window for T in S:
S = "ADOBECODEBANC"
T = "ABC"
Algorithm:
--------------
1. Create a hash table for all the characters of string T. (I assume all characters in T are distinct.) Set the value for each character to -1.
2. count = T.length;
   window = Integer.MAX_VALUE; // a large value
   for (0 to S.length)
       if (S[i] matches a character of T) {
           // if the hash table is not yet fully populated, i.e. the first window is not complete
           if (count > 0)
               count--;
               update its position (from S) in the hash table.
           // if all values of the hash table have been populated, i.e. the first window is complete
           else
               find the min and max positions in the hash table (a linked list lets us find min and max in O(1) time).
               current_window = max - min;
               if (window > current_window)
                   window = current_window; // if the current window is smaller than the previous one, replace it
               remove this character's old node from the linked list.
       }
       add a new node for this position to the linked list
       and update the node reference in the hash table.
3. return window;
Note:
We move from left to right (0 to N) and remove old values from the list,
so the list stays sorted by position: its first node holds the smallest index and its last node the largest.
Here is the implementation in Java (the WindowNode helper simply pairs a character with its index):

static class WindowNode {
    char c;
    int index;
    WindowNode(char c, int index) { this.c = c; this.index = index; }
}

public static void find(String source, String text) {
    LinkedList<WindowNode> orderedList = new LinkedList<WindowNode>();
    int count = text.length();
    int[] window = new int[2];
    window[0] = 0;
    window[1] = Integer.MAX_VALUE;
    HashMap<Character, WindowNode> map = new HashMap<Character, WindowNode>();
    // preprocess
    for (int i = 0; i < text.length(); i++) {
        map.put(text.charAt(i), new WindowNode(text.charAt(i), -1));
    }
    for (int i = 0; i < source.length(); i++) {
        if (map.containsKey(source.charAt(i))) {
            // if the first window is not yet filled
            if (count > 0) {
                count--;
            } else {
                // if the last window is bigger than the current window, replace it
                if ((window[1] - window[0]) > (orderedList.getLast().index - orderedList.getFirst().index)) {
                    window[0] = orderedList.getFirst().index;
                    window[1] = orderedList.getLast().index;
                }
                orderedList.remove(map.get(source.charAt(i)));
            }
            WindowNode node = new WindowNode(source.charAt(i), i);
            orderedList.addLast(node);
            map.put(source.charAt(i), node);
        }
    }
    System.out.println("Here is the smallest window: from " + window[0] + " to " + window[1]);
}
Time = O(N), where N = size of S
Space = O(2M), i.e. O(M) for the hash map plus O(M) for the linked list, where M = size of T
If the file is small enough to fit in memory, the solution is straightforward:
use one min-heap of size 10 and a HashMap<String, count>.

heap = new minheap();
for each item
    // if you don't already have k items on the heap, add this one
    if (heap.count < k)
        heap.Add(item)
    else if (item.frequency > heap.Peek().frequency)
    {
        // The new item's frequency is greater than the lowest frequency
        // already on the heap. Remove the root item from the heap
        // and add the new item.
        heap.RemoveRoot();
        heap.Add(item);
    }

But if it is a big file, a book etc., then it is tricky to find the top 10 words. I would use the SPACE-SAVING algorithm.
SPACESAVING(k)
    n ← 0;
    T ← ∅;
    foreach i do
        n ← n + 1;
        if i ∈ T then            // if the word already exists, increase its count
            ci ← ci + 1;
        else if |T| < k then     // if T holds fewer than k words, add the new word
            T ← T ∪ {i};
            ci ← 1;              // a new word's count starts at 1
        else
            j ← arg min j∈T cj;  // if T is full, remove the least frequent word
            ci ← cj + 1;         // the new word inherits the removed word's count
            T ← T ∪ {i} \ {j};
Here is link of original research paper:
dimacs.rutgers.edu/~graham/pubs/papers/freqcacm.pdf
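A minimal Java sketch of the Space-Saving loop above (the class and method names are mine; a linear scan finds the minimum counter here for brevity, whereas the paper's Stream-Summary structure makes that lookup O(1)):

```java
import java.util.*;

public class SpaceSaving {
    // Keep at most k counters; on overflow, evict the minimum counter
    // and let the new word inherit its count (an overestimate).
    public static Map<String, Integer> topWords(Iterable<String> words, int k) {
        Map<String, Integer> t = new HashMap<>();
        for (String w : words) {
            if (t.containsKey(w)) {
                t.put(w, t.get(w) + 1);        // existing word: increment
            } else if (t.size() < k) {
                t.put(w, 1);                   // room left: start a new counter
            } else {
                // evict the least frequent word and inherit its count
                String min = Collections.min(t.keySet(), Comparator.comparingInt(t::get));
                int c = t.remove(min);
                t.put(w, c + 1);
            }
        }
        return t;
    }
}
```

Frequent words survive because their counters keep growing, while rare words churn through the evicted slots.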
@Kelvin
I missed mentioning the getNthElement(int N) method; here it is.
I am using the heap-sort idea to find the Nth element, but I am not sorting all the elements; only N removals from the heap are needed.

public int getNthElement(int N) {
    int result = 0;
    for (int i = 0; i < N; i++) {
        result = minHeap.removeRoot();
        minHeap.heapify();
    }
    return result;
}
First of all, we have the boundary T(n < 2) = O(1) (equivalently, T(n <= 1) = O(1)).
Second, here I am calculating the big-O notation, not the exact number of iterations.
So if in some case we have one step more or less, we still consider it O(2^n).
Like in your test case it is 2^n - 1 = 6 - 1 = 5; still its complexity is O(2^n).
The complexity of a naive recursive Fibonacci is indeed 2^n:
T(n <= 1) = O(1) // understood
T(n) = T(n-1) + T(n-2) + O(1)
Assume T(n-1) = O(2^(n-1)); therefore
T(n) = T(n-1) + T(n-2) + O(1), which is equal to
T(n) = O(2^(n-1)) + O(2^(n-2)) + O(1) = O(2^n)
Here is another approach:
T(n) = T(n-1) + T(n-2) = T(n-2) + T(n-3) + T(n-3) + T(n-4)
     = T(n-3) + T(n-4) + T(n-4) + T(n-5) + T(n-4) + T(n-5) + T(n-5) + T(n-6) = ...
In each step you call T twice, which gives the eventual asymptotic bound T(n) = 2 * 2 * ... * 2 = O(2^n).
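The exponential blow-up is easy to see empirically by counting calls of the naive recursion (a throwaway counter class of my own, not part of the argument above):

```java
public class FibCalls {
    static long calls;

    static long fib(int n) {
        calls++; // one call per node of the recursion tree
        if (n <= 1) return n;
        return fib(n - 1) + fib(n - 2);
    }

    // Number of calls made by fib(n). The recursion tree has
    // 2 * F(n+1) - 1 nodes, which grows exponentially (like phi^n).
    public static long countCalls(int n) {
        calls = 0;
        fib(n);
        return calls;
    }
}
```

For example fib(5) already makes 15 calls, and fib(20) makes over twenty thousand.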
A simpler approach, O(N^2) time and O(1) space:

public static int find(String inputText) {
    if (inputText == null) {
        System.out.println("Input cannot be null!");
        return 0;
    }
    int len = inputText.length();
    // every single character is an odd-length palindrome of length 1
    int palindromes = len;
    // ODD-length palindromes
    for (int i = 1; i < len - 1; i++) {
        for (int j = i - 1, k = i + 1; j >= 0 && k < len; j--, k++) {
            if (inputText.charAt(j) == inputText.charAt(k)) {
                palindromes++;
                System.out.println(inputText.subSequence(j, k + 1));
            } else {
                break;
            }
        }
    }
    // EVEN-length palindromes (start at i = 0 so the pair (0, 1) is not missed)
    for (int i = 0; i < len - 1; i++) {
        for (int j = i, k = i + 1; j >= 0 && k < len; j--, k++) {
            if (inputText.charAt(j) == inputText.charAt(k)) {
                palindromes++;
                System.out.println(inputText.subSequence(j, k + 1));
            } else {
                break;
            }
        }
    }
    return palindromes;
}
Observations from the info provided in the question:
1. If I do not remember the order in which I visited these places, then of course I don't remember the starting location.
2. But I know my current location, i.e. the end of my journey.
3. The end place of one leg is the starting place of the next leg.
On the basis of these observations, here is my algorithm:
String[] places = new String[N + 1]; // N tickets connect N + 1 places
Step 1: Store all tickets in a HashMap with the end location as key and the ticket as value.
        It takes O(N) time and O(N) space, where N = number of tickets.
Step 2: String endLocation = current_location;
        location = N;
        for (ticket 1 to N) {
            Ticket ticket = map.getTicket(endLocation); // from the HashMap
            places[location--] = endLocation;
            endLocation = ticket.getStartPlace();
        }
        places[location] = endLocation; // the starting place of the whole journey
Step 3: return places; // an array of the places in visited sequence
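The steps above can be sketched as follows (the Ticket class and its field names are my own assumptions, since the question does not fix an interface):

```java
import java.util.*;

public class Itinerary {
    static class Ticket {
        final String from, to;
        Ticket(String from, String to) { this.from = from; this.to = to; }
    }

    // Walk backwards from the final location using a map keyed by destination.
    public static List<String> reconstruct(List<Ticket> tickets, String currentLocation) {
        Map<String, Ticket> byDestination = new HashMap<>();
        for (Ticket t : tickets) byDestination.put(t.to, t); // O(N) build

        LinkedList<String> places = new LinkedList<>();
        String end = currentLocation;
        places.addFirst(end);
        for (int i = 0; i < tickets.size(); i++) {
            Ticket t = byDestination.get(end);
            end = t.from;
            places.addFirst(end); // prepend, since we walk backwards
        }
        return places; // N tickets yield N + 1 places in visit order
    }
}
```

Each ticket is looked up exactly once, so the walk is O(N) overall.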
If I am not missing any boundary condition, I don't think it requires any special algorithm to find it in O(log N):

int maxZeroLen(int N) {
    int maxLen = 0;
    int curLen = 0;
    while (N > 0) {
        int remainder = N % 2;
        N = N >> 1; // shift one bit at a time (>> 2 would skip every other bit)
        if (remainder == 0)
            curLen++;
        else
            curLen = 0;
        maxLen = Math.max(maxLen, curLen);
    }
    return maxLen;
}
Here is a straightforward and simple solution, no magic behind it: just put all the sums into a min-heap and return the 3rd element from the heap.

public static int getNthElement(int[] A, int[] B) {
    MinHeap<Integer> minHeap = new MinHeap<Integer>(); // MinHeap is my own helper class
    int m = A.length;
    int n = B.length;
    for (int i = 0; i < m; i++) {
        for (int j = 0; j < n; j++) {
            minHeap.insert(A[i] + B[j]);
        }
    }
    return minHeap.getNthElement(3);
}
@digjim
Thanks for pointing that out; that approach will not work. I had tried to reduce the number of iterations. The version below, which puts all the sums into the heap, works; no magic behind it.

public static int getNthElement(int[] A, int[] B) {
    MinHeap<Integer> minHeap = new MinHeap<Integer>();
    int m = A.length;
    int n = B.length;
    for (int i = 0; i < m; i++) {
        for (int j = 0; j < n; j++) {
            minHeap.insert(A[i] + B[j]);
        }
    }
    return minHeap.getNthElement(3);
}
I would require that the user plays some music first. That gives me information about the user's musical interests, such as artist, music type (jazz etc.), details of melody and harmony, rhythm, instrumentation, and so on.
I would use this information as "music attributes/characteristics" to describe a song, and it would be used to select songs from the library for recommendation.
The songs would be classified accordingly in the database.
Each characteristic of a song has a different score, and the sum of the scores of all characteristics gives the score of the song. The songs are then sorted by score for recommendation.
An "item-based prediction algorithm" is the best match to calculate the score of a song.
Class ICommand
    execute();
    // All command flags should be listed here;
    // the execute method works on the basis
    // of the flag value
enum CommandTypes
    POWER_OFF,
    POWER_ON,
    ...

Class Command implements ICommand
    up()
    down()
    off()
    on()

Class VideoCommand extends Command
    changeVideoSettings();

Class AudioCommand extends Command
    changeAudioSettings();

public class Button
    Command c;
    Button(Command c);
    click(); // click will execute the command

Class Remote
    Button videoButton;
    Button audioButton;
    powerOn();
    powerOff();
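The outline above is the classic Command pattern; here is a compact runnable sketch of it (the return strings and the concrete wiring are my own additions for demonstration):

```java
public class RemoteDemo {
    interface ICommand {
        String execute();
    }

    static class VideoCommand implements ICommand {
        public String execute() { return "video settings changed"; }
    }

    static class AudioCommand implements ICommand {
        public String execute() { return "audio settings changed"; }
    }

    // A button only holds a command; it does not know what the command does.
    static class Button {
        private final ICommand c;
        Button(ICommand c) { this.c = c; }
        String click() { return c.execute(); }
    }

    static class Remote {
        final Button videoButton = new Button(new VideoCommand());
        final Button audioButton = new Button(new AudioCommand());
    }
}
```

Decoupling the button from the command means new commands can be added without touching Button or Remote.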
Sorry, the edit facility is not available, so I am submitting it again. Here it is with a minor fix; it was not working for N = 1, O = 3.
public static List<Integer> findFactors(int N, int O) {
    int x = 1;
    int tmp = 1;
    for (int i = 1; i <= N; i++) {
        x = x * 10;
        tmp = tmp * 2;
    }
    x = x - 1;
    for (int i = x; i >= tmp; i--) { // the loop condition must test i, not x
        List<Integer> result = check(i, O);
        if (result != null) {
            System.out.println("This is the number: " + i);
            return result;
        }
    }
    throw new RuntimeException("It does not match the requirement");
}
private static List<Integer> check(int Y, int N) {
    int tmp = Y;
    List<Integer> factors = new ArrayList<Integer>(N);
    for (int X = 2; X <= 9; X++) { // single-digit factors run up to 9
        for (int i = 0; i < N; i++) {
            if (tmp % X != 0) {
                factors.clear();
                tmp = Y;
                break;
            }
            factors.add(X);
            tmp = tmp / X;
            if (tmp >= 2 && tmp <= 9 && factors.size() == 2) {
                factors.add(tmp);
                return factors;
            }
        }
    }
    return null;
}

public static void main(String[] args) {
    System.out.println(findFactors(2, 3));
}
Here is the implementation in Java.
I am using a reverse approach: dividing the given number into one-digit divisors and adding all divisors to the result set; if we end up with more or fewer divisors than required, it is a failure.
public static List<Integer> findFactors(int N, int O) {
    int x = 1;
    int tmp = 1;
    for (int i = 1; i <= N; i++) {
        x = x * 10;
        tmp = tmp * 2;
    }
    x = x - 1;
    for (int i = x; i >= tmp; i--) { // the loop condition must test i, not x
        List<Integer> result = check(i, O);
        if (result != null) {
            System.out.println("This is the number: " + i);
            return result;
        }
    }
    throw new RuntimeException("It does not match the requirement");
}

private static List<Integer> check(int Y, int N) {
    int tmp = Y;
    List<Integer> factors = new ArrayList<Integer>(N);
    for (int X = 2; X <= 9; X++) { // single-digit factors run up to 9
        for (int i = 0; i < N; i++) {
            if (tmp % X != 0) {
                factors.clear();
                tmp = Y;
                break;
            }
            factors.add(X);
            tmp = tmp / X;
            if (tmp >= 2 && tmp <= 9) {
                factors.add(tmp);
                if (factors.size() == 3) {
                    return factors;
                }
                break;
            }
        }
    }
    return null;
}

public static void main(String[] args) {
    System.out.println(findFactors(2, 3));
}
We need to find the 3rd element; it will be the 3rd in its row (if row elements are smaller than column elements, like in this question), and similarly for a column if the column's elements are smaller than the rows'.
If we have duplicates, then we have to check the bigger array in the next iteration.
Sorry, I missed the method (it uses the Manacher algorithm to count all distinct palindromes).
- Ajeet December 12, 2013